Continuous Discussions (#c9d9) Podcast: Episode 32 – Orchestrating Your Testing Suite

January 12, 2016 @ 8:00 am - 9:00 am


Last Tuesday I participated in an online panel on the subject of Orchestrating Your Testing Suite, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps.

Below is the footage and a few excerpts. For the full event page, see the link above.

Footage

Excerpts

What does your test matrix look like?
I mostly worked on server-side applications in the past few years, where the environment they ran on was fairly prescribed, so variations in operating systems and hardware weren't a big deal. The last company I worked at was an enterprise doing services provisioning and orchestration, and we were talking to a lot of external subsystems. Those had their own APIs, some of which were very well encapsulated and documented, and some of which weren't. I would say a lot of the hard work of predicting and managing our test matrix had to do with these external system dependencies.
In our project the operational reality was that some of these systems were really big, and we didn't have the license or resources to virtualize them, so they sat at a partner site, and the partner would need to take their systems up and down. Sometimes we did virtualize them, and if that's feasible it's great of course, but a lot of the time we had to work our way through issues with systems being down, or changes in the interface not being documented. We sometimes used stub services to approximate behavior and as a kind of fallback. But a lot of it was frankly about leaving some slack, where we could say, "Okay, the system was offline on Tuesday, we'll come back on Thursday and test it." Those are the X-factors we had to make a little bit of an allowance and budget for while we figured out what to do.
There was no silver bullet; the longer we worked on these systems, the more we learned about them and the better we understood their intricacies. It's an 80-20 thing, where the 20% were the tricky ones, so we did those first and left slack in the schedule where we could to deal with them. Certainly we tried to work with the partners and customers to get access to the systems. Getting access was hard, since sometimes these were systems that weren't in wide circulation. The APIs weren't super well documented, so we would try to get questions answered, like, "It seems like something changed, what is it and why, and what do we need to know about it?" It was a person-to-person effort to get that information.
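To make the stubbing idea from that answer a bit more concrete, here is a minimal sketch of a stand-in for an external partner API, written with Python's standard library. The endpoint, payload, and port are hypothetical, not anything from the actual project; the point is only that a canned service lets testing proceed while the real system is offline.

```python
# Minimal stub for an external partner API (hypothetical /provision
# endpoint and payload), usable while the real system is unavailable.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = {"orderId": "demo-123", "status": "PROVISIONED"}

class StubPartnerAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and ignore the request body, then return a plausible reply.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        body = json.dumps(CANNED_RESPONSE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StubPartnerAPI).serve_forever()
```

A stub like this obviously can't catch undocumented interface changes in the real system; it just keeps the rest of the suite moving until the partner system is reachable again.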
How do you define the pathway through your test cycle?

My main area of focus is how the user interacts with a feature, how we co-create a narrative with developers so we have good input we can use to figure out what to test, and how we manage depth and breadth. With our systems that was always a big challenge: we have a hundred things, so how do we avoid testing the first 20 of them in a lot of depth and then running out of time? Our team was largely distributed, so we not only tried to create opportunities to co-create a quality narrative that informed both development and testing, but we also had to find a way to encapsulate it so it was somewhat durable as it made its way to someone who couldn't be in that discussion. We made some progress, but it was challenging. For every way it could go right, there were ten ways it could be mediocre. So we were always challenging ourselves to do that better and bring a better narrative to more people, so they could focus their test efforts at the top of the functional interface layer.
When that came into play, the best thing was stubbed-out services: to an extent, we could simulate some of these things, at least so that many of the key interactions could get through a lot of the time. That's what we would do when we had a lot of problems with the systems. Beyond trying to get to those systems early and often, which was obviously a good idea in theory but hard in practice with everything else going on, the stubbed-out services that let us work through it to a degree were probably what helped most. A rough sketch of what that looks like from the test's side follows below.
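As a hedged illustration of "key interactions getting through" against a stand-in, here is a small test that injects a stub client in place of the external system. OrderService, its provision call, and the reply shape are hypothetical names chosen for the example, not the project's actual code.

```python
# Exercising our own logic against a stand-in for the external system;
# OrderService and the partner_client interface are hypothetical.
import unittest
from unittest import mock

class OrderService:
    def __init__(self, partner_client):
        self.partner_client = partner_client

    def place(self, order):
        reply = self.partner_client.provision(order)
        return reply["status"] == "PROVISIONED"

class OrderServiceTest(unittest.TestCase):
    def test_place_succeeds_when_partner_provisions(self):
        stub_client = mock.Mock()
        stub_client.provision.return_value = {"status": "PROVISIONED"}
        self.assertTrue(OrderService(stub_client).place({"sku": "demo"}))

if __name__ == "__main__":
    unittest.main()
```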
How do you manage test environments and data?
At my last job, virtual environments were very helpful. The other thing I learned is to go out and get high-quality input, so we knew we were going to build and test something that was valuable to the customer. One of the things I learned the hard way to do better early: it's worth being out on the front lines, with the user as your proxy, to get really high-quality sample data that matches what users are going to do in the real world. The reasons why that's important are probably pretty obvious. When you don't do it well, you pay for it in multiples downstream; when we did do it well, it was really helpful, it helped us ask the right questions and organize and execute the right tests early on.
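One minimal way to fold that kind of sample data into a suite is to drive tests directly from the collected records. The sketch below assumes a small inline list standing in for a fixture gathered from real users; the field names and validation logic are hypothetical.

```python
# Driving tests from representative sample data; in practice the records
# would come from a fixture gathered with real users, and the fields and
# validation rule here are hypothetical stand-ins.
import unittest

SAMPLE_ORDERS = [
    {"customerId": "acme-001", "quantity": 3},
    {"customerId": "acme-002", "quantity": 1},
]

def is_valid_order(order):
    # Stand-in for the validation logic under test.
    return bool(order.get("customerId")) and order.get("quantity", 0) > 0

class SampleDataTest(unittest.TestCase):
    def test_real_world_orders_are_accepted(self):
        for order in SAMPLE_ORDERS:
            self.assertTrue(is_valid_order(order), msg=order)

if __name__ == "__main__":
    unittest.main()
```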
How have testing needs changed over time?
They probably should have been changing more. We were mostly focused on getting better at the things I mentioned: bringing a better narrative in, and getting better data into the beginning of the development and test cycle. Scale mattered, and the environments we were able to spin up on AWS and so forth did a pretty good job of letting us bring up variations where we needed to. Other than that, I don't think I have a really exciting answer to that one!

Organizer

Electric Cloud