Product Overview

Optimize for Infrastructure is a tool produced by webMethods for keeping track of:

  • Internal server metrics, e.g. queue sizes, RAM usage, server heartbeat, etc.
  • Custom-defined business information, e.g. reporting on each transaction

A typical use of Optimize for Infrastructure is when we have a webMethods environment consisting of one or more Integration Servers and one or more Broker Servers and we wish to monitor their overall health.

It uses several components which work together to monitor the infrastructure

When all pieces are installed, a typical Optimize for Infrastructure physical deployment looks like the following.

Optimize is a fairly complicated product with many different "moving parts". The separate components shown above that are needed for it to function are:

  • Analytic Engine
  • WS Data Collector (optional)
  • My webMethods Server
  • Infrastructure Data Collector (InfraDC)
  • Broker
  • Analysis Database

Apart from the database, each of these components is its own separate process running on the operating system.

Installing Optimize for Infrastructure

All of the above listed components are installed via the webMethods installer tool, except the database, which needs to be set up independently. In Fabric 7.0 you need at least two schemas for Optimize: one to run the Portal and one to run the Analytic Engine. In Fabric 7.1, where the Portal has been fixed up, you can host everything inside one schema: simply run the database tool, create "all" tables and point everything at that instance.

If your installation does not include the Optimize components, you can make a fresh webMethods installation into a new directory, installing just the Analytic Engine and WS Data Collector. This works fine, as all components communicate over regular TCP/IP sockets.

A working MWS portal with the Optimize screens installed needs to be functioning first. If you are running Optimize on its own machine, you can get by with putting both the Optimize and the MWS tables in the one schema, although this should be done cautiously in production.

Once the MWS is running, you need to start up the Analytic Engine and WS Data Collector.

For the Analytic Engine, run

webInstallDir/optimize/analysis/bin/startupAnalyticEngine.sh

and for the WS Data Collector

webInstallDir/optimize/dataCollector/bin/startupDataCollectorEngine.sh
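
In practice you will usually want both engines to keep running after you log out of the shell. A minimal sketch, assuming a Unix-like host and the install paths above (the log file names are just examples):

# start the Analytic Engine in the background so it survives logout
nohup webInstallDir/optimize/analysis/bin/startupAnalyticEngine.sh > analysis.out 2>&1 &

# start the WS Data Collector the same way
nohup webInstallDir/optimize/dataCollector/bin/startupDataCollectorEngine.sh > wsdc.out 2>&1 &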

It is also a good idea to create a new Broker Server, running on its own port, for the Optimize components to use. Ideally, all the components pictured above (except the monitored IS and Broker at the top) would be installed on a separate machine from the production environment.
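
As a rough sketch of creating that dedicated Broker Server from the command line, assuming the server_config utility that ships with the Broker; the data directory, port and description below are example values and the option names may differ by release, so check the Broker administration guide for your version:

# create a new Broker Server on its own port for the Optimize components
# (data directory, port and description are illustrative only)
webInstallDir/Broker/bin/server_config create webInstallDir/Broker/data/awbrokers/optimize -p 6850 -d "Broker Server for Optimize"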

Before Optimize can be used, we need to use a set of screens inside the MWS called the "Central Configurator". On these screens you create an Optimize "deployment", which consists of an MWS, a Broker, an Analytic Engine and a WS Data Collector, and combines them into the runtime configuration shown in the diagram above. Don't get confused by the diagram below; it just shows how the Central Configurator connects to the components to hook them up. After all this is done we "formally" hook the MWS into the Analytic Engine so we can use the screens to display the data.

If you haven't already done so, create a database for analysis using the dbInstaller tool. Get your DBA to create you a schema and then install the "PRODUCT - Optimize" component into this.
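
As a rough command-line sketch of that step, assuming the database configuration tool under common/db/bin and an Oracle schema; the script name, flags and connection details below are assumptions for illustration, so verify them against the database configuration documentation for your release:

# create the Optimize product tables in the schema your DBA provided
# (every flag and connection value here is illustrative only)
webInstallDir/common/db/bin/dbConfigurator.sh --action create --dbms oracle --component optimize \
    --version latest --url jdbc:oracle:thin:@dbhost:1521:ORCL --user optimize --password secret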

Next tell the Central Configurator where the Analysis DB is by creating a DB Connection pool from MWS Administration -> System-Wide -> Environments -> Database Pool Configuration.
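
For illustration only, an Oracle-style pool definition for the schema above might look like the following; the field labels and values are assumptions, and the screen will prompt you for whatever driver, URL and credentials match your database:

Pool name : OptimizeAnalysisPool
Driver    : Oracle thin driver
URL       : jdbc:oracle:thin:@dbhost:1521:ORCL
User      : optimize
Password  : ********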

Now define an environment from "Administration -> System Wide -> Environments -> Define Environments". Click "Add Environment" and give it a name and description. Next, click on the name of the new environment; this begins the configuration. The following tabs should be worked through in order:

  • Design Servers: Choose the Optimize for Infrastructure template.
  • Configure Servers: Most default settings are OK. If you intend to send out SNMP alerts, these settings can be added inside "Analytic Engine v7.0.0.2 -> SNMPAlert Settings".
  • Define Hosts: Enter the names of the machines which the different components run on.
  • Map Servers: Connect each logical server to the machine name it resides on.
  • Map Endpoints: Make sure all ports are correct for this installation. NOTE: Make sure you change the Analytic WS port type to "https". This is the port the MWS will eventually use to communicate with the Analytic Engine in the top diagram.
  • Validate: Run this tab to check everything is OK. Now click "Finish" to take you back to the Define Environments screen.

Click on the green arrow on the right of your environment to go to the "Deploy" screen. Choose option "Deploy All". If you need to change a setting, come back here, update it and do this task again.

This deployment opens a new port on the Analytic Engine runtime, which you can now hook the MWS into via "Administration -> My webMethods -> System Settings". Don't forget to select "SSL", as you chose HTTPS above.

Now the runtime architecture functions as in the diagram. The WS Data Collector was set up with the location of the Broker to send data back to in the first step.

The WS Data Collector publishes a WSDL file which any custom code can use to report back metrics and have them fed into the Analytic Engine.

System data is captured via the Infrastructure Data Collector (InfraDC). This is a custom IS which can be pointed at Brokers, Integration Servers and general SNMP sources to monitor their health, and it reports its information back via a JMS queue on the Broker. Once you have configured and deployed your analytic environment, you need to start up the InfraDC, which sits inside the "ManagerServer" or "InfrastructureDC" directory of your webMethods installation. This is just a cut-down IS and is started in exactly the same way via the bin/server.sh command. You might like to start it on a different port, as it wants to use the default 5555, for example

webInstallDir/ManagerServer/bin/server.sh -port 5556
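
As a quick sanity check that the cut-down IS is listening on its new port before you open the browser (host and port here are just examples matching the command above):

# should return the IS admin page headers if InfraDC is up
curl -I http://localhost:5556/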

Once the server has started, log into it via a web browser on that port. InfraDC sends the data to the Analytic Engine via a JMS queue on the Broker; you configure which queue to send it to inside the "Settings". The Analytic Engine seems to have "Broker #1/analytics" hard-coded in a config file, so I found it easier to just create this Broker instance and connect InfraDC to it.

Selecting Components to Monitor

Once logged into the MWS, go to Administration -> Analytics -> Monitored Components -> Discovery. On this page you can enter the details of the specific Brokers and Integration Servers you wish to monitor. Enter their details here and, once Optimize has verified them all, they appear on the "Assets" page.

Now you can go to the "Monitored Components" tab and see a list of areas you can select for monitoring. For each, you can click on the linked blue name to the left. For example, click on "Integration Server" and on the next screen there are two empty blue boxes. In the top box you select the components to monitor, in this case which Integration Servers are available from the "Assets" page, and in the bottom box you select the specific metrics for that component. For example, you can select "Free Memory" as a metric to monitor for an Integration Server. Note that when you monitor any metric, you automatically get a metric called "Object Status", which is the component's "up or down" status. In this case the Integration Server's Object Status can tell you if the server is up or down.

Once you have monitored some components and selected the metrics to monitor, you need to wait a few minutes for the InfraDC to send that data back to the Analytic Engine. The default is 4 minutes, and this can be set from the Admin page of the InfraDC (remember, it is just an IS with a primary port, in the same way).

Viewing the System Overview

Data is now being collected from your servers and fed into the Analytic Engine. Go to "Monitoring -> System Wide -> System Overview" to see a dashboard of the metrics you have chosen to monitor.

At first, not all metrics may be displayed, as the MWS enforces a 200-result limit. You can either enter specific search criteria into the search box or go to the "Options" tab and turn off the maximum limit to show all metrics.

From the tree view you can now drill down and view the individual metrics you selected in the previous step. Click on a metric's name to see a graph of its value.

Creating Rules Around KPIs

Now that we are storing the data, we can make a rule to alert us when it falls outside of our boundaries.

Goto "Analytics -> Rules -> Rule List" and create a new rule

For one-time metrics, a "Rule Type" of "Event" is better.

"Rule Expression" is where you create an equation for your KPI against a value, ie selecting Free Memory and then wanting to know if it is less than 400 for example.

Click on "Edit" gives you a wizard to help build up the expression. From "Category" select one of the "Components" which you configured to be tracked before, eg "Integration Server". KPI now changes to a sub category of "Component".

Select the KPI you wish and the comparator value. Knowing what is a good comparator value may take some trial and error for a particular KPI.
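
As a purely illustrative example of the kind of expression the wizard builds (the exact syntax rendered on screen may differ by release), the Free Memory check described above would end up looking roughly like:

IntegrationServer.FreeMemory < 400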

Users and Alerts

If you wish to have someone notified via email when this rule is violated, create a user in the MWS system and provide a valid email address. When you do this, you will be able to click the "Add Alert" button from the rule screen and select a user to receive the email.

Actions

Setting up SNMP alerts from rule violations is covered below.

Rule Instances

Once you have saved your rule, a new box called "Rule Instances" appears at the bottom of the edit rule page. This gives you a summary of the items which match this rule; for example, if you have chosen to monitor IS status, it will tell you how many ISs it is monitoring. If you are looking at queue size, it will tell you how many queues it is monitoring.

Clicking on "View Instances" gives you a breakdown of these monitored instances and shows you which (if any) are out of compliance. You can click into each instance to see a graph of the history of that item.

One of the aims of Optimize is to have it automatically report when there is an issue with your monitored infrastructure. One way of doing this is to have Optimize report via SNMP back to a central reporting system. 

Generating SNMP Alerts

Optimize can be configured to fire off an SNMP alert when a rule is violated. This allows you to send a message to a corporate reporting platform which may already exist inside your organisation.

Configure the SNMP Server

You need to configure SNMP for the Optimize instance first. Go to MWS -> System Wide -> Environments -> Define Environments and click on your Optimize environment. Under "Configure Servers", open up the Analytic Engine and find the settings for "SNMPAlert Settings". In the text field to the right you get to edit some XML to specify the SNMP server's location.

First, don't forget to remove the comment lines around the XML. The fields are as follows:

  • Manager Name: an internally used name to denote this SNMP server.
  • Host: host machine of the SNMP server.
  • Port: port of the SNMP server.
  • Community Handle: not confirmed yet; waiting on webMethods support.
  • Community Password: same.

If you have multiple SNMP servers for different rules you can specify extra "<property name="SNMPManager">" tags.
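
For illustration only, a filled-in set of values might look like the following. Keep the element names from the commented template that ships in the settings box; the values here are assumptions (162 is the standard SNMP trap port, and "public" is a common default community string, but use whatever your SNMP manager expects):

Manager Name       : centralNMS            (any internal label)
Host               : nms.example.com
Port               : 162
Community Handle   : public
Community Password : public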

Save this page and then click "Finish". Back on the environments page, click on the green arrow to the right of your environment's listing. On the next screen click "Deploy Updates". Now you need to restart the Analytic Engine.

yourwebmpath/optimize/analysis/bin/shutdown.sh

nohup yourwebmpath/optimize/analysis/bin/startupAnalyticEngine.sh &

Configure A Rule To Use This SNMP Server

Once the SNMP server has been configured and the Analytic Engine has been restarted, create a new rule as per normal or edit an existing one. Down the bottom of the edit rule page, the "Add Action" button is now enabled. Press it and select the SNMP server you configured in the previous step. Click save for the rule and you are done. Now when a rule violation is triggered, as well as any user emails you have configured, an SNMP trap will be sent also.
