By Richard Park
January 2, 2013 08:30 AM EST
When IT people think about application performance monitoring, they're usually thinking about which metrics they should monitor. Resource metrics include CPU utilization, disk queue length, and thread pool size; performance metrics include application response time, responses per interval of time, and concurrent invocations of an application.
"Modeling" is probably not the first term that comes to mind when considering application performance monitoring. But, in fact, "modeling" is exactly what a "domain expert" does when he decides how application components are related with one another, and which metrics matter in gauging application performance.
The problem for IT organizations is to extract this type of "institutional knowledge" from a handful of experts to make it accessible and relevant to more people in IT Operations and Application Support. So whether you are talking about a complex approach like using UML diagrams, or something easier to grasp like calculating workload for your monitored elements, a model is simply an abstraction of best practices to make it easier to understand application performance.
Gartner underscores the importance of modeling in its analysis of the APM market. Its Magic Quadrant for Application Performance Monitoring discusses five functional dimensions, one of them being "runtime application architecture discovery, modeling, and display." This is the discovery of the hardware and software components of an application and the communication paths connecting these components together. Put even more simply, one of the key criteria for a good APM solution is to discover and create an accurate model.
Let's go through a brief example of why application modeling is so important for performance monitoring, and why Netuitive put so much effort into this in our recent Netuitive 6.0 release.
A typical Java application runs on an application server such as Tomcat, JBoss, WebSphere, or WebLogic. Because the application is distinct from the application server and JVM, it makes sense to model these as separate components.
The application has performance metrics such as response time and responses per time interval. The application server has JVM resource metrics such as CPU utilization and thread pool size.
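To make the separation concrete, here is a minimal sketch of this two-component model in Python. The class and metric names are illustrative assumptions, not Netuitive's actual API: the point is simply that the application and its application server are distinct entities, each carrying its own kind of metrics.

```python
from dataclasses import dataclass

@dataclass
class ApplicationServer:
    """The server/JVM entity: carries resource metrics."""
    name: str
    cpu_utilization: float = 0.0  # percent
    thread_pool_size: int = 0

@dataclass
class Application:
    """The application entity: carries performance metrics."""
    name: str
    server: ApplicationServer     # relationship, not a merged entity
    response_time_ms: float = 0.0
    responses_per_min: int = 0

# Example: one app deployed on one Tomcat instance
tomcat = ApplicationServer("tomcat-01", cpu_utilization=45.0, thread_pool_size=200)
orders = Application("order-service", server=tomcat,
                     response_time_ms=120.0, responses_per_min=3000)
```

Because the application holds a reference to its server rather than absorbing its metrics, a second application can point at the same `ApplicationServer` object without any duplication.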
Traditional "monolithic" models of performance combine metrics for an application and its application server into a single entity. But this monolithic approach makes it more difficult to model a scenario where multiple applications run on the same application server.
The monolithic approach is also not as intuitive if you want to quickly see if there is a problem with an application. It is straightforward to mark an application as "red" if its response time is increasing and to mark an application server as "red" if CPU utilization is high. But if resource and performance metrics are combined together, do you mark an application as red if CPU utilization is high? It isn't clear. High CPU utilization may not necessarily affect application performance, but you still want to know about it from a resource utilization perspective.
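The ambiguity disappears once each entity is judged only by its own metrics. The thresholds and function names below are hypothetical, but they show the idea: the server can go "red" on resource pressure while the application stays "green" as long as its response time holds.

```python
def application_status(response_time_ms: float, baseline_ms: float) -> str:
    """Performance view: red only when response time degrades
    well beyond its baseline (1.5x is an illustrative threshold)."""
    return "red" if response_time_ms > 1.5 * baseline_ms else "green"

def server_status(cpu_utilization: float) -> str:
    """Resource view: red on high CPU, independent of app performance
    (90% is an illustrative threshold)."""
    return "red" if cpu_utilization > 90.0 else "green"

# High CPU flags the server, but the application is still green
# because its response time is within normal range.
app = application_status(110.0, baseline_ms=100.0)   # "green"
srv = server_status(95.0)                            # "red"
```

In a monolithic model there is only one status to set, so one of these two signals would have to be suppressed or misreported.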
But a "monolithic" model is no longer appropriate for today's distributed enterprise applications. A modern Java application runs on multiple application servers in a clustered architecture. The cluster provides increased scalability and redundancy as more cluster nodes are added.
The most typical way to model an application cluster is as a cluster entity that contains multiple application servers.
This model focuses primarily on infrastructure, where one can determine if resources are evenly distributed among cluster nodes.
You can also adopt a more "application-centric" model by creating a cluster that contains only the applications.
This model provides more visibility into total application throughput and average response time. It focuses mainly on application performance throughout the entire cluster.
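The cluster-level rollup behind this view is straightforward. As a hedged sketch (the per-node numbers are made up): total throughput is the sum across nodes, and average response time should be throughput-weighted so that busy nodes count proportionally.

```python
# Per-node application metrics: (responses_per_min, avg_response_time_ms)
node_metrics = [
    (1200, 110.0),  # node 1
    (900, 130.0),   # node 2
    (1500, 95.0),   # node 3
]

# Total application throughput across the cluster
total_throughput = sum(rpm for rpm, _ in node_metrics)

# Throughput-weighted average response time: each node's response
# time contributes in proportion to the traffic it actually served.
avg_response_ms = sum(rpm * rt for rpm, rt in node_metrics) / total_throughput
```

A simple unweighted mean of the three response times would overstate the slow, lightly loaded node's influence, which is why the weighted form is the more useful cluster-level indicator.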
The bottom line is that a good model is essential for understanding and evaluating application performance. Today's distributed enterprise-class Java applications are more complex than ever, and depending on the "institutional knowledge" of a handful of application support experts is risky. Predictive IT analytics have now advanced to the point of eliminating this risk by condensing modeling best practices into templates that define which metrics matter, and by distilling the analysis of these metrics into composite health and workload indices.
To learn more about how this all works, check out our white paper on monitoring distributed Java applications.
- "All It Took Was One E-Mail to Larry," Says Former eBay Research Director As He Moves to Google
- Google Ramps Up Its Mobile Reach: Launches "Mobile Web Search"
- VoIP Update: Yahoo! Buys DialPad
- Ericsson + Napster = World's First "Wireless Digital Music" Brand
- SYS-CON i-Technology Podcast August 30, 2005
- Free Guest Passes for the SOA World Conference & Expo in NYC
- A Flair for Food - Health-Conscious Cooking Is This Chef's Cup Of Tea
- Sony PSP May Feature Porn
- Kapow Helps Seiko UK, Provides SMS Text-Alert Services
- South Korea is World's Largest Phisher
- Will the Mac OS Now Be Offered by Dell?
- UK Targeted for Trojan Attacks