GSI Scout System Interfaces: A Performance Testing Story

Over the past several months, GSI's team has been busy adding a number of new system interfaces to GSI Scout, as well as extending the interfaces that were already in place. Until recently, GSI Scout's most proven interface was with the Outage Management System (OMS) of a large Investor Owned Utility, where it has been in production for the past several years. This interface processes messages generated for all of the outages created, updated, or deleted by the OMS and can easily handle tens of thousands of messages daily.

The newest effort involved extending this interface to send updates back to the OMS, as well as adding new interfaces with crew management systems and field force automation software, making it easier for field crews doing restoration work or damage assessment collection. These new interfaces round out GSI Scout's offering as a complete damage assessment tool that can be deeply integrated with other systems at a utility.

The development of these interfaces took many months, but the really important part of the process has been the stress, load, and performance testing we've carried out over the past couple of months. These tests involve many different people from both our organization and our utility partner's organization, covering many disciplines across many groups. They are fairly effort-intensive to arrange and carry out, but the results, if done correctly, are priceless.

If you are not familiar with GSI Scout, I'd highly suggest you contact our sales team for a demonstration. In short, GSI Scout is a damage assessment and analysis software solution that provides rich field client software running on Windows, Windows Mobile, iOS, and Android, used by damage assessors in the field to collect damage assessment data. These apps are written natively for each platform, so they provide a full-featured, disconnected environment for users in the field. These are not your average HTML5 apps; they are native apps that perform very well and allow you to take your GIS map data out with you and mark damage assessments directly on that map, or collect damage assessment data for features that may not be represented on your maps. GSI Scout also provides a cloud-based web portal with outage display, detailed reporting, and deep analytical features that project things like material and crew requirements even after only a small number of assessments have been completed. There is a lot of software here, much more than I can write about today – contact us if you need more information.

Architecture Overview

To begin, I'll provide a brief overview of the system interface architecture. The first important point is that for the large IOU we've been working with, GSI Scout is completely cloud based. All of the GSI Scout components reside on Microsoft Azure, and the interfaces we've put in place between GSI Scout and the utility's on-premises systems (OMS, FFA, WMS, etc.) are all fully operational, using encrypted communications across the internet between the cloud and their on-premises counterparts. The GSI Scout module responsible for integration with other systems is called the GSI System Bridge, and it contains two components: the GSI System Bridge Web Service and the GSI System Bridge Message Brokering Service. The Web Service provides an interface that other systems can send data to, or make requests of, using a request/response messaging protocol. The Message Brokering Service sends messages to other "subscribing" systems when specific events are generated within GSI Scout. For example, when an assessor marks an outage as "Assessment Complete," an event is generated within GSI Scout and the Message Brokering Service sends an XML message to any external system that has subscribed to the "Assessment Complete" event – which could be the OMS in this particular case. Together, these two components provide a number of different interfaces that can be used with many different external systems within a utility.
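To make the publish/subscribe flow a little more concrete, here is a minimal sketch of what a subscribing system's receiving endpoint could look like. This is purely illustrative: the element names (OutageId, Status, CompletedAt), the port, and the plain-HTTP transport are assumptions made for the example and are not the actual GSI System Bridge message schema, which in production runs over encrypted channels.

# Minimal sketch of a subscriber endpoint for an "Assessment Complete" event.
# The XML element names and port below are illustrative assumptions only --
# they are NOT the actual GSI System Bridge message schema.
from http.server import BaseHTTPRequestHandler, HTTPServer
import xml.etree.ElementTree as ET


class AssessmentCompleteHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw XML message pushed by the Message Brokering Service.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)

        # Parse the hypothetical payload and pull out the fields an OMS
        # would likely need in order to update its outage record.
        root = ET.fromstring(body)
        outage_id = root.findtext("OutageId")
        status = root.findtext("Status")          # e.g. "AssessmentComplete"
        completed_at = root.findtext("CompletedAt")

        print(f"Outage {outage_id}: {status} at {completed_at}")

        # Acknowledge receipt so the broker does not retry the message.
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    # Listen on an arbitrary local port for pushed event messages.
    HTTPServer(("0.0.0.0", 8080), AssessmentCompleteHandler).serve_forever()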

Determining the Scale

In a large storm scenario where thousands of outages are being processed, performance is the most important aspect of an interface. If the interface can't keep up with demand, the system will go down, and that's not something you want to happen during a large storm. Performance and load testing is the only way to ensure that this won't happen. To stage these simulations, everything had to be automated. We weren't interested in testing the web portal or the field client software because we already had a number of users testing those elements, as well as automated tests that exercise various features as part of a nightly process. What we really wanted to focus on was the interfaces themselves, so all of the operations performed by the users had to be automated so they could be played back time and time again.

This involved three types of automation:

  • Customer calls generating outages in the OMS
  • Outage assignments and dispatches to the users or crews
  • Outage assessment “completions” by the field crews

With these three automations in place, all of the interface points could be exercised and loaded to any level required. The next step was to determine what loading levels were needed to simulate a large storm:

  • How many customer calls?
  • How many outages to dispatch to crews, and at what rate?
  • How quickly could outages be assessed by the users in the field and at what rate would that occur?

The good news is that the first two items in each list above are fairly well known. Most utilities already have very good metrics on call volumes and dispatch rates, and those two items can be "replayed" over and over again to test these types of interfaces (see the sketch below). What we really needed to build additional automation for was the process of completing assessments on outages and determining the rate at which those assessments could be completed.
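As an illustration of what that replay might look like, here is a minimal sketch that reads historical, timestamped call records from a CSV file and replays them against a test endpoint while preserving their relative timing, optionally sped up. The file layout, endpoint URL, and speed-up factor are assumptions for the example, not details of the actual test harness.

# Minimal replay sketch: push recorded customer-call events to a test
# endpoint, preserving their relative timing (optionally sped up).
# The CSV layout, endpoint URL, and speedup factor are assumptions.
import csv
import time
import urllib.request
from datetime import datetime

CALL_LOG = "storm_calls.csv"            # columns: timestamp,account_id
TEST_ENDPOINT = "http://localhost:8080/calls"
SPEEDUP = 10.0                          # replay 10x faster than real time


def replay_calls():
    with open(CALL_LOG, newline="") as f:
        rows = list(csv.DictReader(f))

    start = datetime.fromisoformat(rows[0]["timestamp"])
    replay_start = time.monotonic()

    for row in rows:
        # Wait until this call's (scaled) offset from the start of the storm.
        offset = (datetime.fromisoformat(row["timestamp"]) - start).total_seconds()
        delay = offset / SPEEDUP - (time.monotonic() - replay_start)
        if delay > 0:
            time.sleep(delay)

        # Post the call record to the system under test.
        data = f"account_id={row['account_id']}".encode()
        urllib.request.urlopen(TEST_ENDPOINT, data=data)


if __name__ == "__main__":
    replay_calls()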

We estimated that, on average, damage assessors could "complete" the assessment of 4 outages per hour – one every 15 minutes. That involves driving or walking to the outage, assessing the area, recording damaged equipment or vegetation, and then marking the outage as "assessment complete" and syncing the field client database. (If your assessors can't do 4 outages per hour, then you really need to call our sales group and talk to them about our field damage assessment software.) With that metric in place, we can scale based on the number of assessors: for a very large storm you might have 300 assessors, and if they are all assessing 4 outages an hour, the combined rate is 1,200 outages assessed per hour. With this in place, we can start to automate the process of simulating a very large storm response and exercise these interfaces in a fully loaded state; a sketch of that scheduling calculation follows below.
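To show how those numbers translate into a load-test schedule, here is a minimal sketch that takes the assessor count and per-assessor rate, computes the aggregate completion rate, and produces evenly spaced "assessment complete" events for the automation to fire. The function and structure names are hypothetical; the real harness drives the same rate through the System Bridge interfaces.

# Minimal sketch: turn assessor count and per-assessor rate into a
# schedule of simulated "assessment complete" events for a load test.
# Names and structures here are hypothetical, not the actual test harness.
from dataclasses import dataclass


@dataclass
class CompletionEvent:
    outage_id: int
    offset_seconds: float   # when to fire, relative to test start


def build_schedule(assessors: int = 300,
                   outages_per_hour_each: float = 4.0,
                   duration_hours: float = 1.0) -> list:
    # Aggregate rate: 300 assessors * 4 outages/hour = 1,200 outages/hour.
    total_rate = assessors * outages_per_hour_each
    total_events = int(total_rate * duration_hours)
    interval = 3600.0 / total_rate   # seconds between completions (3 s here)

    return [CompletionEvent(outage_id=i, offset_seconds=i * interval)
            for i in range(total_events)]


if __name__ == "__main__":
    schedule = build_schedule()
    print(f"{len(schedule)} completions over 1 hour, "
          f"one every {schedule[1].offset_seconds:.1f} s")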

In our next blog post, we'll break the performance and load testing down into a number of phases, which allowed us to focus on one interface at a time and look for bottlenecks before involving multiple interfaces.
