Recently I had a discussion with my old college friends on Facebook, and we were posting technical questions that would tickle our brains and help us re-live our college days…
The following was my question… Posting it here as it might help the readers of my blog understand one beautiful characteristic of NP-Complete problems.
Q: There is a set of problems such that if we find a solution to one of them, then we have solutions to all of them. What is the technical name for this problem class?
There were multiple replies to it; one interesting one was "design patterns"… Then the discussion went into TSP, and we arrived at NP-Complete. Then the discussion went further into exponential Big-O time, etc… But I later sensed that the wow factor the question had was gone as part of our discussions… I did some study to bring back the wow factor the question had. The following was my reply…
The real WOW factor I had when I read these things in my college days was: "WHAT!!! If I solve (in computer terms, produce an effective algorithm for) one of the problems in the problem class, I can solve all of them? How nerdy must the finder of such things be???" [I doubt I have this level of curiosity now :)]
These are the currently known characteristics of NP-Complete problems:
1) There is no known algorithm that completes in polynomial time (in computational terms, no efficient solution, i.e., nothing in the order of N, N log N, etc.).
2) If we can find a solution (a reasonable, polynomial-time algorithm) to one of them, we can solve all of them. (The real YAHOO!!! is here!)
It means that any NP-Complete problem can be transformed into any other NP-Complete problem in polynomial time. Isn't that great? That is what the theory claims.
It means if I solve one of the NP-Complete problems, I can solve all of them from the NP-Complete list… http://en.wikipedia.org/wiki/List_of_NP-complete_problems . In other words, if I solve the NP-Complete version of the Traveling Sales Person (TSP) problem, I can transform all the other NP-Complete problems into the traveling sales person problem and solve them in polynomial time.
i.e., suppose TSP == solvable in polynomial time. Then I have a solution (a best algorithm) to the NP-Complete Hamiltonian Cycle problem, which no one else has… because Hamiltonian Cycle solution = Transform2TSP(Hamiltonian Cycle instance) (polynomial time) + Solve TSP (polynomial time). Hence the NP-Complete Hamiltonian Cycle problem is solved in polynomial time…
To state it practically: if the Subset Sum problem that vels mentioned is NP-Complete, then it can be converted to the corresponding TSP version in polynomial time, and vice versa. (Yahoo!!!) And if I have a solution (again, for computer programmers a solution means a good algorithm that solves the problem) to the subset problem, I can solve TSP…
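The classic Hamiltonian Cycle → TSP reduction can be sketched in Java. This is a minimal illustration under my own naming (`HamToTsp`, `toTspWeights`, `bestTour` are not from any library), and the TSP step here is a brute-force search, so it only demonstrates the idea on tiny graphs. The transformation itself is the polynomial-time part: edge present → weight 1, edge absent → weight 2, and the graph has a Hamiltonian cycle exactly when the cheapest tour costs n.

```java
import java.util.ArrayList;
import java.util.List;

public class HamToTsp {
    // The polynomial-time transformation: edge present -> weight 1, absent -> weight 2.
    // A Hamiltonian cycle exists iff the resulting TSP instance has a tour of cost n.
    static int[][] toTspWeights(boolean[][] adj) {
        int n = adj.length;
        int[][] w = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) w[i][j] = adj[i][j] ? 1 : 2;
        return w;
    }

    // Brute-force TSP (exponential, fine only for tiny n):
    // cheapest tour starting and ending at city 0.
    static int bestTour(int[][] w) {
        int n = w.length;
        List<Integer> rest = new ArrayList<>();
        for (int i = 1; i < n; i++) rest.add(i);
        return permute(w, rest, 0, 0);
    }

    private static int permute(int[][] w, List<Integer> rest, int last, int cost) {
        if (rest.isEmpty()) return cost + w[last][0]; // close the tour back to 0
        int best = Integer.MAX_VALUE;
        for (int k = 0; k < rest.size(); k++) {
            int city = rest.remove(k);
            best = Math.min(best, permute(w, rest, city, cost + w[last][city]));
            rest.add(k, city); // backtrack
        }
        return best;
    }

    static boolean hasHamiltonianCycle(boolean[][] adj) {
        return bestTour(toTspWeights(adj)) == adj.length;
    }

    public static void main(String[] args) {
        // 4-cycle 0-1-2-3-0: clearly has a Hamiltonian cycle
        boolean[][] cycle = {
            {false, true, false, true},
            {true, false, true, false},
            {false, true, false, true},
            {true, false, true, false}
        };
        System.out.println(hasHamiltonianCycle(cycle));
    }
}
```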
Java 5 gives architects the ability to scale applications memory-wise based on the characteristics of the application's memory usage pattern.
Java Garbage collector basics
The default GC of Java is a serial-collection GC, i.e., when Java decides to do a GC, your application threads are suspended until the GC thread finishes.
On a single-processor machine this type of GC is fine, but on a multi-processor machine it is a killer. Imagine your JBoss or IBM WebSphere running a banking project; for sure there would be a high hardware investment in a multi-processor machine (not less than a 12-processor machine). With this dedicated setup and serial collection, your application that ran on 12 processors stops, and only one processor is used for the GC activity. Your application is at a halt. So the throughput of the application is directly impacted by your GC, and it worsens as the number of processors increases.
So it is a must to customize the GC collection. But remember: until you understand the intricacies of the Java heap and GC, don't meddle with the GC collection; leave it at the default, because a non-expert is more likely to spoil the throughput than to increase it.
See the throughput distribution in the graph below.
Java garbage collection design
What would you do if you were given a chance to design the GC? If you have a serial algorithm that sweeps all the objects in memory and then deallocates the unreferenced ones, then the Big-O of the algorithm you design is directly proportional to the number of objects in memory. So the time complexity of the algorithm you design will worsen for larger systems.
How did Sun Microsystems get around this time-complexity issue?
As far as memory consumption is concerned, research has identified that young objects have the highest probability of dying first. That means an object created recently is more likely to die than an object that has survived for a while. The current GC algorithms efficiently use this principle of memory usage to produce better Big-O numbers.
The entire Java heap is segregated into multiple segments to take advantage of this young-die-first fact.
The figure shows how the heap is segregated: the entire heap is separated into the Young, Tenured, and Perm spaces.
The GC algorithm is split into minor and major runs.
A minor run does GC only in the Young space, and a major run does GC on both the Young and Tenured spaces. That is, a major run is the maximum time a GC could take, and we don't want it to run that often. To avoid doing major runs, the Java GC uses the young-die-first fact and runs GC on the Young space. If an object survives the run, it is moved to Tenured. When Tenured is filled, a major run is triggered. This means major runs are mostly avoided.
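The generational behaviour above can be observed with a small allocation sketch (class and method names here are my own, purely for illustration): most allocations become unreachable immediately and are reclaimed cheaply by minor runs in the Young space, while only the objects we deliberately keep reachable survive long enough to be promoted to Tenured. Run it with -verbose:gc and you should see many [GC …] lines and few, if any, [Full GC …] lines.

```java
import java.util.ArrayList;
import java.util.List;

public class GenerationalDemo {
    // Allocates many short-lived arrays (typically collected by minor GCs in
    // the Young space) and keeps every 100th one reachable so it survives
    // minor runs and is eventually promoted to Tenured.
    static List<byte[]> churn(int iterations) {
        List<byte[]> survivors = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            byte[] shortLived = new byte[10 * 1024]; // unreachable after this iteration -> dies young
            if (i % 100 == 0) {
                survivors.add(shortLived); // kept reachable -> candidate for promotion to Tenured
            }
        }
        return survivors;
    }

    public static void main(String[] args) {
        // Try: java -verbose:gc GenerationalDemo
        System.out.println("long-lived objects retained: " + churn(10_000).size());
    }
}
```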
What does this imply for architects?
By intelligently manipulating the Young and Tenured sizes, we can impact various characteristics of the application:
1. Frequency of GC runs.
2. Time taken for the GC to complete its run.
3. Throughput of the application.
I'm not going to explain why each of these is impacted; readers are expected to understand the relation at this point of the tutorial.
Java 5 provides you the ability to manipulate the relative sizes of the memory segments.
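A sketch of the relevant HotSpot sizing flags (xyz.jar stands in for your own application; the concrete values here are illustrative, not recommendations):

```shell
# -Xms/-Xmx        : total heap, initial and maximum
# -XX:NewRatio=3   : Tenured is 3x the size of Young (Young = 1/4 of the heap)
# -XX:SurvivorRatio=6 : eden is 6x the size of each survivor space
# -XX:MaxPermSize  : upper bound for the Perm space
java -Xms256m -Xmx1024m \
     -XX:NewRatio=3 \
     -XX:SurvivorRatio=6 \
     -XX:MaxPermSize=128m \
     -jar xyz.jar
```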
Yes, I agree the throughput problem of the serial collector is still open. Java 5 has allowed us to tackle this by providing two alternative GC algorithms to the traditional serial-collector method:
1. Throughput collector
2. Concurrent Low Pause Collector
I will attempt to give a short description of the above collectors.
1. Throughput collector
The throughput collector is a generational collector similar to the serial collector but with multiple threads used to do the minor collection. The major collections are essentially the same as with the serial collector. By default on a host with N CPUs, the throughput collector uses N garbage collector threads in the minor collection. The number of garbage collector threads can be controlled with a command line option (see below). On a host with 1 CPU the throughput collector will likely not perform as well as the serial collector because of the additional overhead for the parallel execution (e.g., synchronization costs). On a host with 2 CPUs the throughput collector generally performs as well as the serial garbage collector and a reduction in the minor garbage collector pause times can be expected on hosts with more than 2 CPUs.
2. Concurrent Low Pause Collector
The concurrent low pause collector is a generational collector similar to the serial collector, but the tenured generation is collected concurrently with the application threads. This means the pauses in the application are close to nil.
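Selecting the collectors is a matter of command-line flags (again xyz.jar is a placeholder; the thread count is an example value):

```shell
# Throughput (parallel) collector, with an explicit GC thread count:
java -XX:+UseParallelGC -XX:ParallelGCThreads=8 -jar xyz.jar

# Concurrent low pause collector:
java -XX:+UseConcMarkSweepGC -jar xyz.jar
```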
Aspiring memory manipulators :)
To start with, just observe the memory consumption of your software system:
java -verbose:gc -jar xyz.jar
[GC 325407K->83000K(776768K), 0.2300771 secs]
[GC 325816K->83372K(776768K), 0.2454258 secs]
[Full GC 267628K->83769K(776768K), 1.8479984 secs]
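Each of those log lines reads: heap used before the collection -> heap used after (total heap size), followed by the pause time; [GC …] is a minor run, [Full GC …] a major run. A small parser (the class name GcLogLine and its fields are my own, assuming the exact format shown above) makes the fields explicit:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogLine {
    // Matches lines like: [GC 325407K->83000K(776768K), 0.2300771 secs]
    private static final Pattern P =
        Pattern.compile("\\[(GC|Full GC) (\\d+)K->(\\d+)K\\((\\d+)K\\), ([0-9.]+) secs\\]?");

    final String kind;      // "GC" = minor run, "Full GC" = major run
    final long beforeK;     // heap used before the collection, in KB
    final long afterK;      // heap used after the collection, in KB
    final long totalK;      // total heap size, in KB
    final double pauseSecs; // how long the application threads were paused

    GcLogLine(String line) {
        Matcher m = P.matcher(line);
        if (!m.find()) throw new IllegalArgumentException("not a GC log line: " + line);
        kind = m.group(1);
        beforeK = Long.parseLong(m.group(2));
        afterK = Long.parseLong(m.group(3));
        totalK = Long.parseLong(m.group(4));
        pauseSecs = Double.parseDouble(m.group(5));
    }

    long reclaimedK() { return beforeK - afterK; }

    public static void main(String[] args) {
        GcLogLine g = new GcLogLine("[GC 325407K->83000K(776768K), 0.2300771 secs]");
        System.out.println(g.kind + " reclaimed " + g.reclaimedK() + "K in " + g.pauseSecs + "s");
    }
}
```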
Gather enough understanding of the GC behaviour of your system against the hardware. Just remember: a system that is best on a single processor will be a pain on a multiprocessor, and a system that is good on a multiprocessor will be a killer on a single processor. The efficiency also differs with the application's characteristics. So leave it at the default until you are comfortable with the details.
So GC tuning by architects is an ever-on task throughout the lifecycle of the project. And it requires practice.
As always, only technical queries regarding GC tuning are accepted at firstname.lastname@example.org
Sun has done excellent work in integrating remote management into the JVM. They have built SNMP-agent-like capability into the JVM: the SNMP OID is analogous to the MBeans, and the SNMP MIB is analogous to the MBean Server.
This beautiful capability allows you to connect to the JVM and monitor the crimes we have committed in the code with regard to run-time memory/CPU and thread utilization. I became a lover of this feature. I have always thought of building this capability into the applications we build. This feature is a real bliss for architects.
Head-on Jump
To have first-hand experience with JVM instrumentation, just follow these simple steps…
1. Have Java 5.0 installed on your system and create a Java program that runs until you forcefully stop it. If you have a framework with thread pools, resource management, etc., it is a good example.
2. Run your java program with this additional parameter.
java -Dcom.sun.management.jmxremote -jar xyz.jar
This command publishes the MBeanServer in the JVM as an RMI resource for JConsole to connect to.
3. Start JConsole and connect to your program in the connection dialog box.
JConsole will allow you to monitor the memory and threads used, identify deadlocks, etc.
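The same MBeans JConsole reads remotely are also available in-process through the java.lang.management platform MXBeans, which is handy if you want your application to report on itself. A minimal sketch (the class name SelfMonitor is mine; the API calls are the standard Java 5 management interfaces):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

public class SelfMonitor {
    public static void main(String[] args) {
        // Heap usage: the same numbers JConsole plots on its Memory tab.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.println("heap used: " + heap.getUsed() / 1024
                + "K of " + heap.getMax() / 1024 + "K");

        // Thread counts, as on JConsole's Threads tab.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + threads.getThreadCount());

        // Deadlock detection: returns null when no threads are deadlocked.
        long[] deadlocked = threads.findMonitorDeadlockedThreads();
        System.out.println("deadlocked threads: "
                + (deadlocked == null ? 0 : deadlocked.length));
    }
}
```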
See the performance scaling of an Enterprise Information Integration framework that I recently wrote. This is a heavy ETL- and EII-style tool that aggregates data from multiple databases. You would definitely need some performance numbers for its production run. The JConsole tool would help you prove the robustness of your framework by showing the memory and thread occupancy, and how controlled they are amidst heavy load on the framework.