HYBRID MONITORING TO IMPROVE SERVICE QUALITY
Avoiding problems is better than solving them...
Many call it Big Data, others Industry 4.0, but we call it by a less revolutionary name: Hybrid Monitoring.
Simply put, this means that the amassed data is analyzed centrally to make well-grounded decisions and to solve emerging problems more quickly. In IT infrastructures, the volume of information measured by different sensors is correlated with data from other operating processes such as change management, patch management, risk management and capacity planning. Ideally, this allows decisions to be made on a case-by-case basis and tasks to be completed autonomously.
Thanks to its open architecture, the monitoring system netsensor, developed in cooperation with the company logifab, allows various operating processes to be integrated into one central data-processing platform. This enables us to evaluate all collected data independently of the processes they originate from and to continuously improve our operating quality.
Proactive Hybrid Monitoring at all operational levels
Anomalies and failures are registered and analyzed immediately. Usually the failure is eliminated without the users of the affected services noticing the issue at all.
All monitored service parameters and occurring events are logged and can be queried via the multi-tenant management console.
Using an intuitive web user interface, data and measurements can be viewed and compliance with service level agreements can be verified.
Early detection of performance bottlenecks to plan capacities efficiently
Monitoring of line quality, network bandwidth and utilization. Real-time traffic-flow surveillance and anomaly detection to block incoming denial-of-service attacks in good time.
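The traffic anomaly detection described above can work in many ways; one common approach is to compare each sample against a moving baseline. A minimal sketch in Python, where the window size, threshold factor and function names are illustrative assumptions rather than netsensor's actual implementation:

```python
from collections import deque

def make_anomaly_detector(window=10, threshold=3.0):
    """Flag a traffic sample as anomalous when it exceeds the recent
    moving average by a configurable factor (illustrative sketch)."""
    history = deque(maxlen=window)

    def check(packets_per_second):
        baseline = sum(history) / len(history) if history else None
        history.append(packets_per_second)
        if baseline is None:
            return False  # not enough history yet to judge
        return packets_per_second > threshold * baseline

    return check

detector = make_anomaly_detector(window=5, threshold=3.0)
normal = [detector(v) for v in [100, 110, 95, 105, 100]]  # steady traffic
spike = detector(2000)  # a sudden flood of packets triggers the alarm
```

Production systems typically combine several such signals (flow counts, source diversity, protocol mix) before blocking traffic, but the baseline-and-threshold pattern is the core idea.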
Monitoring of system resources such as power allocation of hypervisors, CPU and memory, workload of VMs, I/O latency and read/write rates of disk-based systems.
Resource consumption monitoring of LAMP- or IIS/ASP-based applications to assess software efficiency and scalability.
Monitoring of database performance parameters to tune database applications.
Ongoing simulation of end-user transactions
Checking application availability and performance with simulated user transactions in order to monitor service level agreements.
Simulation of web browser requests to measure loading times. Detailed waterfall statistics for the embedded HTML objects on a web page. Visualization of response information such as cookies, cache control, encoding and status codes.
Comparison of loading times measured via a CDN and directly from the origin.
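A simple loading-time measurement and CDN-versus-origin comparison might look like the following Python sketch. The function names are assumptions for illustration; a full waterfall analysis would additionally time every embedded object:

```python
import time
import urllib.request

def measure_load_time(url, timeout=10):
    """Fetch a URL and return (elapsed seconds, HTTP status code)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read()  # include transfer time, not just time-to-first-byte
        status = response.status
    return time.perf_counter() - start, status

def cdn_speedup(origin_seconds, cdn_seconds):
    """How much faster CDN delivery was, as a percentage."""
    return round((1 - cdn_seconds / origin_seconds) * 100, 1)

# e.g. if the origin took 0.8 s and the CDN edge 0.2 s:
speedup = cdn_speedup(0.8, 0.2)
```

Repeating such measurements from several locations on a schedule yields the trend data needed to verify that CDN delivery actually pays off.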
Quality and acceptance of the monitoring are determined by event processing
Avoidance of false alarms through intelligent decision-making logic that detects transient events and distinguishes them from causal ones.
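One widespread way to suppress false alarms from transient glitches is to require several consecutive failing checks before escalating. A minimal Python sketch of that idea, with the confirmation count as an illustrative assumption:

```python
def confirm_after(n):
    """Raise an alarm only after n consecutive failing checks, so a
    single transient glitch does not page anyone (illustrative sketch)."""
    streak = 0

    def observe(check_failed):
        nonlocal streak
        streak = streak + 1 if check_failed else 0  # a success resets the streak
        return streak >= n

    return observe

observe = confirm_after(3)
results = [observe(f) for f in [True, True, False, True, True, True]]
# only the final observation escalates: the earlier failures were reset
# by the intervening successful check
```

Real decision logic usually layers further rules on top, such as correlating events across hosts to find the causal root instead of alerting on every symptom.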
Individual escalation scenarios with scheduled notification schemes, which can be further extended with software-based measures to automatically resolve errors.
Calendar-based schemes allow for individual escalations during office hours and on-call standby.
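The calendar-based routing described above can be reduced to a time-window lookup. A small Python sketch; the team names and office hours are hypothetical placeholders, not an actual schedule:

```python
from datetime import datetime, time

def escalation_target(event_time, office_start=time(8), office_end=time(18)):
    """Route an alert to the office team during business hours and to the
    on-call standby otherwise (hypothetical schedule, for illustration)."""
    if office_start <= event_time.time() < office_end:
        return "office-team"
    return "on-call-standby"

weekday_alert = escalation_target(datetime(2024, 5, 6, 10, 30))  # mid-morning
night_alert = escalation_target(datetime(2024, 5, 6, 23, 15))    # late evening
```

A production scheme would also consult weekends, public holidays and per-customer calendars before picking the notification path.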
Web chat module for communicating with and coordinating the customer's involved staff.
Transparency builds trust
SLA reports to document the guaranteed performance. Statistics and trend analyses of the agreed quality key performance indicators, logged events and recovery times.
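The availability figure at the heart of such SLA reports is a straightforward calculation over the reporting period. A Python sketch, assuming downtime is already aggregated in minutes:

```python
def availability_percent(period_minutes, downtime_minutes):
    """Availability achieved over a reporting period, as used in SLA reports."""
    return round((1 - downtime_minutes / period_minutes) * 100, 3)

# a 30-day month (43,200 minutes) with 22 minutes of recorded downtime:
monthly = availability_percent(30 * 24 * 60, 22)
```

Whether such a month meets the SLA then depends on the agreed target, e.g. 99.9 % would be met here while 99.95 % would not.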
Analysis of measurement anomalies by means of interactive charts, including a time slider for navigating the recorded measurement history.
Ongoing risk evaluation of the monitored assets based on configured security requirements and relevant security events.