Hmm.. so good that you are able to monitor response times of the application components, but then how does that help tune the application... do not count activity.. show me the productivity.. So here I'm able to monitor my code and generate a couple of reports.. but they didn't really help me understand performance bottlenecks.. I would run them again and again to aggregate more reports!! No amount of diligence seemed to work.. imagine, I was not able to find problems.. that reminds me of the adage 'bin mange moti mile aur mange mile na bheekh' (unasked you may get pearls, yet by asking you won't even get alms)... I had understood there is a sure-shot knowledge gap between the technical and the business perspective... after all, we are still far from creating tools which would scan, identify and fix code on their own. There are just so many options for every refactoring one intends to make in the code.. one needs to weigh one's options before making a code change lest it fall under its own weight...

Ok, I digressed, because even while conducting my own research I chanced to read interesting articles which gave me a peek into the future about gainfully utilizing concepts from autonomic computing in application monitoring as an aid to debugging applications... but 'Show me the results' was driving me crazy.. then, as luck would have it, a visit to the local book shop helped me discover "Bitter Java", and as prophesied by "The Alchemist", when one truly starts liking something the entire universe conspires to make it happen.. seemed so much true.. The "Bitter EJB" chapter seems to have been written just for me: a quick read, and together with the reports it immediately helped me identify the 'chatty' interface between our application and the database.. the frequent server roundtrips are consuming far more time over the wire than conducting any productive work... stereotypically such problems occur when we make far more granular requests than required, or maybe execute queries in loops... it helps to find initial symptoms of the problem at the macro level.. such information can be suitably used in other low-performing use cases.. bugs come and go, but I'm accumulating debugging skills... I'm already feeling the need to study "Refactoring" by Martin Fowler.. maybe I should get back to reading.. more later
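P.S. To make the 'chatty' interface concrete, here is a minimal, hypothetical sketch (the table and method names are made up, not from our actual application) of the anti-pattern the reports pointed at, and the coarser-grained alternative that replaces N roundtrips with one:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class OrderTotals {

        // Chatty: one query per customer id, i.e. a query executed inside a loop -> N network roundtrips.
        static Map<Long, Double> totalsChatty(Connection con, List<Long> customerIds) throws SQLException {
            Map<Long, Double> totals = new HashMap<Long, Double>();
            PreparedStatement ps = con.prepareStatement(
                    "SELECT SUM(amount) FROM orders WHERE customer_id = ?");
            for (Long id : customerIds) {
                ps.setLong(1, id);
                ResultSet rs = ps.executeQuery();   // one roundtrip per customer
                if (rs.next()) {
                    totals.put(id, rs.getDouble(1));
                }
                rs.close();
            }
            ps.close();
            return totals;
        }

        // Coarser-grained: one grouped query -> a single roundtrip for all customers.
        static Map<Long, Double> totalsBatched(Connection con) throws SQLException {
            Map<Long, Double> totals = new HashMap<Long, Double>();
            PreparedStatement ps = con.prepareStatement(
                    "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                totals.put(rs.getLong(1), rs.getDouble(2));
            }
            rs.close();
            ps.close();
            return totals;
        }
    }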
Thursday, March 12, 2009
Tuesday, March 3, 2009
Performance Monitoring Utility
My current assignment required me to create a lightweight utility to conduct application monitoring during the production and development phases to identify potential performance bottlenecks. The utility I created is now ready and in beta. The key rationales for creating this utility were:
- Create an unobtrusive (or minimal) way of monitoring the application, although in its current state it does not use aspect technologies and thus requires us to manually inject code into those application components we want to instrument.
- The utility should be really lightweight so that it could be used at all times to monitor application performance and system health.
- Simple to use. Client code should be able to use the utility easily. Figuratively speaking, just place your probes at the desired parts of your application and assign a logically comprehensible name to each, that's all!
- Users should be able to set monitoring preferences.
- Report monitoring results.
- Currently the utility monitors execution times alone, but it can easily be extended to monitor other metrics like memory, CPU etc.
To improve simplicity, reduce verbosity and hide the inner workings of the monitor code, a facade to the monitor library is provided, so all that the client code needs to do is:
    public SomeObject theMethodUnderInstrumentation() {
        // Obtain the facade before the try block so it is also visible in the finally block.
        MonitorFacade facade = MonitorFactory.getFacade();
        try {
            if (facade.isMonitorEnabled()) {
                facade.recordExecutionStart("someLogicalContextName");
            }
            // do some time consuming task here and return its result
            return doSomeTimeConsumingTask();   // placeholder for the actual work
        } finally {
            if (facade.isMonitorEnabled()) {
                facade.recordExecutionStop("someLogicalContextName");
            }
        }
    }
This is all that is required by the developer to instrument his code. Simple, isn't it!! The above becomes transparent to the developer in case we use aspects to define pointcuts that inject similar code.
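As a rough illustration of that aspect-based route (not part of the current utility; a sketch using AspectJ's annotation style, with a hypothetical pointcut expression and package name), the same start/stop calls could be woven in with an around advice:

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class MonitoringAspect {

        // Hypothetical pointcut: instrument every public method in the service layer.
        @Around("execution(public * com.example.service..*.*(..))")
        public Object monitor(ProceedingJoinPoint pjp) throws Throwable {
            MonitorFacade facade = MonitorFactory.getFacade();
            String context = pjp.getSignature().toShortString();  // use the method signature as the logical name
            if (facade.isMonitorEnabled()) {
                facade.recordExecutionStart(context);
            }
            try {
                return pjp.proceed();   // run the actual method under instrumentation
            } finally {
                if (facade.isMonitorEnabled()) {
                    facade.recordExecutionStop(context);
                }
            }
        }
    }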
It is a common observation that an application performance tuning exercise requires an interdisciplinary approach, and much is needed to understand system performance in the right context. It was therefore important to understand the complete execution path under observation. To make this possible the Composite pattern is implemented. The intent of Composite is to compose objects into tree structures to represent part-whole hierarchies. The call tree is captured, which displays the function execution paths that were traversed in the profiled application. The root of the tree is the entry point into the application or the component. Each function node lists all the functions it called and performance data about those function calls. So, to put it crudely, the Composite pattern helped me create the tree structure of application components under observation.
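A minimal sketch of what such a composite call-tree node might look like (the class and field names here are illustrative, not the utility's actual types):

    import java.util.ArrayList;
    import java.util.List;

    // One node in the captured call tree; the root is the entry point into the component.
    public class CallTreeNode {
        private final String context;        // logical name recorded by the probe
        private long selfMillis;             // time spent in this node itself (excluding children)
        private final List<CallTreeNode> children = new ArrayList<CallTreeNode>();

        public CallTreeNode(String context) {
            this.context = context;
        }

        // Composite: a node holds the calls it made, forming the part-whole hierarchy.
        public void addChild(CallTreeNode child) {
            children.add(child);
        }

        public void setSelfMillis(long selfMillis) {
            this.selfMillis = selfMillis;
        }

        // Total time for this node includes the time spent in all the functions it called.
        public long totalMillis() {
            long total = selfMillis;
            for (CallTreeNode child : children) {
                total += child.totalMillis();
            }
            return total;
        }
    }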
Now that we are ready with the basic infrastructure needed to monitor the code's performance at all times, the next challenge is to create automated load tests, and as anyone with similar experience will immediately identify, creating automated test cases is not that big a deal but maintaining them over a period of time is.. my next task revolves around ways to put intelligence into our load test scripts... I woke up pretty early today .... ;)
Saturday, November 1, 2008
SOA-Reference Model
Read SOA-RM from OASIS
If you are someone who likes to understand the very basics and the core concepts associated with Service Oriented Architecture, you will find the above document a real treat to read. Rest assured, you can go in with an absolutely clean slate; you will actually learn a lot.
Friday, October 24, 2008
Web Services Platforms: Composition and Philosophy
Not surprisingly, and contrary to common knowledge, SOAP today stands for nothing... someone must be seriously embarrassed to refer to it as 'Simple' :) There are so many specifications; I heard someone say there are 105 of them!! That very moment I found it overzealous to study all of them, and considering my feeble-mindedness, I'm sure to get lost in them. But then, what next?!! I just can't give up like that; after all, I have a stiff nose to save ;) They say 'you are what you abstract it from' and maybe that's right... maybe I need to understand the 'big picture' and then drill down to details subject to interest and need. Here, today, a first attempt is made to understand web services as a platform architecture and the defining philosophies of some of its more popular implementations.
Web Services Platform Architecture
Web service technology provides a uniform usage model for components/services, especially within the context of heterogeneous distributed environments. It also virtualizes resources by shielding the idiosyncrasies of the different environments that host those components. This shielding can occur by dynamically selecting and binding those components and by hiding the communication details needed to properly access them. Put simply, web services technology serves as a toolset which can primarily be divided into three core subsystems:
- Invocation: Upon receiving a service invocation request over a supported transport protocol (HTTP[S], JMS, SNMP etc.), a set of handlers pre-process the message as per the QoS (quality of service) requirements (like reliability, security etc.) and then the target Java class is identified (call it first-language interference; I'm told something similar happens in other languages too...). But before delegating the message for processing it needs to be de-serialized into Java objects, and later the response is serialized back into XML documents, which are handed over to the transport layer for onward message delivery. Roughly the same happens on the client side, albeit in the reverse order.
- Serialization: is the process of transforming a Java object into an XML element; the reverse process is called de-serialization. Arguably, this is the most important step as it determines the performance and flexibility of the web services platform, among other things. To accomplish this the serialization engine needs a set of 'mapping strategies' to serialize an instance of a Java class into instances of XML Schema components. A 'mapping strategy' associates a Java class with its target XML Schema type and a description of the serializer that will transform an instance of the Java class into an instance of the Schema type (or vice versa). It should be noted that objects are serialized through a 'serialization context' and that the serialized form of an object may differ based upon the "context", i.e. what objects have been serialized before. Thus a 'serialization context' is the set of 'mapping strategies' that can be used by the serialization subsystem to implement the type mapping used by a particular web service deployment. Common type mapping mechanisms are Standard Binding, Annotations, Algorithmic and Rule Based (need to explore further...).
- Deployment: This subsystem supports the invocation of a Java target as a web service, which includes publishing the WSDL, configuring the end-point listeners and SOAP handlers, mapping WSDL operations to Java method calls and defining the serialization context for binding the WSDL operations to Java targets. (A small annotated sketch tying these three subsystems together follows this list.)
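As a rough illustration of how these three subsystems surface to a developer (here using the JAX-WS annotation style; the class, package and URL are made up for the example):

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService   // deployment: marks the class so the platform can generate and publish a WSDL for it
    public class QuoteService {

        @WebMethod   // invocation: exposed as a WSDL operation; the platform maps the SOAP message to this method
        public double getQuote(String symbol) {
            // serialization: the String parameter and double return value are mapped
            // to/from XML Schema types by the platform's standard binding
            return 42.0;   // placeholder business logic
        }

        public static void main(String[] args) {
            // configure an end-point listener; the WSDL becomes available at the address + "?wsdl"
            Endpoint.publish("http://localhost:8080/quote", new QuoteService());
        }
    }

With that in place, the defining philosophies of some of the more popular implementations: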
- JAX-WS 2.0: Assumes a uniformly available Java universe and thus makes every effort to make it increasingly simple for a Java developer to expose their applications as web services, with annotations and tool support to generate really robust WSDL. Java interfaces form the starting point.
- Apache Axis2: Backed by strong community support, this implementation makes it easy to start from either a WSDL or a Java interface, and Axis2, with its new object model in place, boasts of improved performance and flexibility.
- Apache CXF: Boasts of ease of use and very high performance because of its new 'pull' parsing technique and object model.
Hopefully, this article helps us understand the composition of different WS implementations and make an informed decision when selecting one for the projects at hand.
Labels:
Axis,
CXF,
Java-WS,
SOA,
technology,
web service
Tuesday, October 21, 2008
Approach: How to improve traceability between BPMN process model and UML component model
A workflow can be modeled with a Business Process Diagram or an Activity Diagram and can be transformed, manually or otherwise, using the underlying metamodel. However, the challenge lies in finding a convergence of the object-oriented approach offered by UML and the process-centric approach taken by BPMN. To put this in context, it should be noted that UML methods ask you to find the objects first using the static structure diagrams and only then build dynamic behaviour diagrams to model object interaction. In an attempt to satisfy the stated need, we can model our 'draft' activity model based upon the Business Process Diagram as specified by BPMN at the 'first attempt', and 'later' the 'technical team' can refine and refactor the activity diagram so created, in a few iterations between static diagrams (class diagram etc.) and the activity diagram, to model the dynamic behaviour; this can improve traceability.
For a detailed study of the subject, one may be interested in "Process Modeling Notations and Workflow Patterns", available here.
Labels:
bpm,
bpmn,
development,
software,
technology,
uml
Wednesday, October 15, 2008
BPMN in a nutshell
Business Process Modeling Notation (BPMN) is a standard specification created by the Business Process Management Initiative (BPMI) intended to provide a notation that is readily understood by the Business Analyst (who creates the business process), the Process Developer (who implements the business process into executables) and the Business Owner (who manages and monitors the process). Thus, BPMN standardizes the bridge over the gap between process design and process implementation.
BPMN defines a Business Process Diagram (BPD), which is a specialized flow-charting technique to create graphical model for business process operations. The graphical model so generated is a network of directed graphical objects representing activities.
BPMN elements can be classified into the following four categories:
1. Flow Objects
a] Event - Events affect the flow of the process and usually have a cause (trigger) or an impact (result). Types: {start, intermediate, end}
b] Activity - The work performed; can be further classified as Task (atomic) or Sub-Process (non-atomic or compound).
c] Gateway - Used to model the convergence or divergence of sequence flow and can thus be used to model decision making, forks or joins.
2. Connection Objects: Using them, Flow Objects are connected together to provide the basic skeleton of a business process.
a] Sequence flow - models the 'order' in which activities are performed; it should be noted that 'control flow' is semantically incorrect in the context of a business modelling language.
b] Message flow - models the flow of information.
c] Association - models the inputs and outputs of activities.
3. Swimlanes: The concept of swimlanes is used to organize activities into separate visual categories to illustrate different functional capabilities or responsibilities.
a] Pool: Inter-organization activity, e.g. interactions between customer and supplier organizations, is modeled by giving each participant its own Pool.
b] Lanes: Intra-organization activity, e.g. interactions between various departments of the same organization, can be modeled using Lanes within a Pool.
4. Artifacts:
a] Data Objects: Model the inputs to or outputs from activities, such as rules or documents, e.g. an Order.
b] Groups: Model the logical grouping of a sequence of activities; a group does not alter the sequence flow.
c] Annotation: Provides documentation.
BPMN can be used to model collaborations between two or more business entities, which may be public in nature, or business processes internal to an organization; the difference between the two lies in the level of precision. The primary value-adds that BPMN brings to the table are:
1- Standards based.
2- Easily understood by the complete 'spectrum' of people.
3- Designed to be easily transformed to the de-facto execution language standard BPEL4WS.
Thursday, August 28, 2008
Stories from the trenches
Reporting live from the "technology side of business", I find that the 'geek quotient' required for us to remain in the role of a 'technical advisor' to business has only risen higher and higher, with the added expectation to empathize with business challenges and continuously provide technology tools for business to increase its productivity and profitability. We seek to achieve these goals by:
- using technology to reduce wastage and resource consumption in the business process
- providing high-level visibility into business performance to ensure process re-engineering is carried out in a timely fashion
- monitoring key performance indicators to determine if the business processes are helping the organization reach its goals
- enabling business owners to take effective decisions.
If you are planning a career in the industry you need a good grounding in how technology management differs from traditional methods. Writing code is so 1998. The important skills are:
- domain knowledge: this is the most important attribute. I strongly believe that unless you understand the history, politics and economics of the software development activity in your current project, you are like a labourer who is merely digging soil rather than one who is digging soil to lay a foundation stone. Associate a purpose with your job; it will always help you use information technology and electronic commerce to reduce costs and open up new markets (SaaS, PaaS delivery models etc., more on this later...)
- data modeling: good that you know XML as a technology, but that adds no value to the business if you are not able to create effective data models with which much of the information flow can remain native to the system, without ping-ponging between marshalling and unmarshalling, thus saving precious computing resources without making them platform dependent and creating an infrastructure stack which understands and processes the 'same' object model to support truly heterogeneous distributed computing. (A small sketch of such an annotated data model appears after this list.)
- rules: all these years you have been writing that plumbing code which soon becomes ugly and stinks badly, but then you thought it is the business logic, and where else should it stay? Aren't there ways to externalize business logic from code so that it can be changed dynamically and easily when needed, without going through the painful maintenance cycle? Believe me, even business looks upon it as a bottleneck!! So it is a lose-lose situation. If you want more agility, start with a rule-based architecture wherein you extract the frequently changed business logic into rule sets and decision tables using a rules framework like JBoss Drools or Jess.
- AOP: Aspect-oriented technologies seek to cater to the cross-cutting concerns of an application; much literature can be found on the web. The key point I want to drive home here is that if one is in the process of creating a new system, (s)he can focus on solving the core problem without adding features (s)he is not too sure about, which can always be added later using AOP techniques, or, in an already existing system, by adding a new dimension to the application.
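On the data modeling point above, here is a minimal, hypothetical sketch (the class and element names are made up) of an annotated object model that stays native to the application and is bound to XML only at the edges, using JAXB:

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlRootElement(name = "order")   // the same object model is used in code and on the wire
    public class Order {

        private String customerId;
        private double amount;

        @XmlElement
        public String getCustomerId() { return customerId; }
        public void setCustomerId(String customerId) { this.customerId = customerId; }

        @XmlElement
        public double getAmount() { return amount; }
        public void setAmount(double amount) { this.amount = amount; }

        public static void main(String[] args) throws Exception {
            Order order = new Order();
            order.setCustomerId("C-1001");
            order.setAmount(250.0);

            // Marshalling happens only at the system boundary; inside, everyone works with Order objects.
            Marshaller marshaller = JAXBContext.newInstance(Order.class).createMarshaller();
            marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
            marshaller.marshal(order, System.out);
        }
    }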