Friday, February 23, 2018

Assignments in Intent

We're testing a scenario across two teams where two major areas of features get integrated. In a meeting to discuss testing some of this end to end, where end to end spans a larger scope, we agreed on the need to set up a bunch of environments.

"Our team sets up 16 windows computers in the start state" seemed like an assignment we could agree on.

Two days later, I check on progress. I had personally installed 3 computers on top of what we agreed my team would do, and was ready to move on to using the computers as part of the scenario. The response I get is an excited confirmation of having the rest of them available too.

The scenario we go through has a portal view into the computers installed, and checking if the numbers and details add up, I quickly learn that they don't. The ones I set up are fine. All the others are not. We identify the problems ("I forgot a step in preparing the environment" and "It did not occur to me that I would need to verify on the system level that they are fine") and agree on correcting them.

Two days later, I check again. It has not been corrected. So I ask where we are, only to hear that we are ready, which we are not. Containing the mild steam coming out of my ears, I ask whether they checked the list where they could see things are fine from a system perspective, and I get explanations ("I don't have access", "I did not know I have access").

Another day passes by and I realize there's a holiday season coming up, so I check again. They are not fine, but "they should be". I ask for a list of the computers, only to learn there isn't one. And I spend a few days tracking the IP addresses (assigned by DHCP, so they change over time), the only information given, matching them to image names and the actual status of getting things to work.
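
Even a small script would have made that tracking less painful; below is a minimal sketch, with hypothetical IP addresses and assuming reverse DNS works in the environment, of resolving DHCP-assigned addresses back to machine names so they can be matched to image names.

```python
# Minimal sketch (hypothetical addresses): resolve DHCP-assigned IPs back to
# machine names so they can be matched to image names and install status.
import socket

ip_addresses = ["10.0.0.17", "10.0.0.23", "10.0.0.42"]  # the only info handed over

for ip in ip_addresses:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
    except (socket.herror, socket.gaierror):
        hostname = "<unresolved>"
    print(f"{ip:15} -> {hostname}")
```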

The assignment was made in intent. No clarifying questions were asked. When solutions were given, the instructions were dismissed. Learning did not show up in the process, as the same patterns kept repeating. And finally, there was no consideration for the handoff that the planned vacation obviously required.

This is the difference between a trainee and a senior. Seniors see and track things others don't.

Today I'm enjoying the results of this prep, finding some really cool bugs after guessing some of the places where things would be likely to break. Having these issues now, knowing they will soon vanish and that my mention of them here is all I will have to show for them, is deeply satisfying.


Wednesday, February 21, 2018

Conferences as a change tool

European Testing Conference 2018 is now behind us, except for the closing activities. And it was the best of the three we've done so far. We closed the 2018 edition saying thank you and guiding people forward. Forward in this case was a call to action to look into TestBash Netherlands, which is in just two months in Utrecht. I will personally attend as a speaker, and having been to various TestBashes, I'm excited about the opportunity to share and learn with fellow test enthusiasts.

This promotion of other conferences is yet another thing where we are different. We don’t promote the other conferences because they ask us to. We don’t promote them because they pay us to. We promote them because we’ve learned something before we started organizing our own conference: we all do better when we all do better. 

At TestBash New York, Helena Jeret-Mae delivered a brilliant talk about career growth, with one powerful and sticky message: in her career, while she stayed in the office and focused on excellence at work, nothing special happened. But when she went to conferences, met people and networked, things started happening. She summed it up as "Nothing happens when nothing happens". There are side effects to growing yourself at conferences, learning and networking, that create a network effect relevant to advancing your personal career. This resonated.

At European Testing Conference 2018, there was a group of people in different roles in a company. There was the manager, and there was the person the manager would manage. Telling the person to do something differently had not resulted in a change. Sending the person to a place where people enthusiastically talked about doing the thing differently made the person come to the manager with a great idea: there's this thing I'm not doing now, I want to do more of it. Ownership shifted. A change started. The threshold of thinking "all the cool kids are doing this" was exceeded. The power of the crowd made a difference.

While we would love to see you at European Testing Conference 2019, the software industry is growing at a pace where we realistically see the need for awesome testing and programming education (tech excellence). We need to grow as professionals ourselves, but also make sure our colleagues get to grow. We all do better when we all do better. We suggest you find a local meetup, learn and network. Go to any conference; go to great conferences. Go and be inspired. The talks can give you nudges with ideas; the skills you acquire through practice. Sample over time, always look for new ideas.

The short list of conferences I make a point of mentioning is the ones I recognize for being inclusive and welcoming, and for treating speakers right (paying their expenses and including new voices among seasoned ones). I want to share my love and appreciation for TestBashes (all of them, they are all around), Agile Testing Days (in the USA and Germany), CAST (USA, in its latest editions) and Nordic Testing Days. The last one is a fairly recent addition to my list now that they've grown into a solid success that can treat the speakers right.

I enjoy most of the conferences I've been to, and would recommend you go to any of them. I have a list of my speaking engagements at http://maaretp.com, and the list of places I've experienced is growing.

What’s the conference you will be at this year? Make sure there is one. Nothing happens when nothing happens. Make things happen for you. 

Introducing intentional vs. accidental bugs

There's an observation I made in a previous job, on the psychology of how people function, that has kept my mind occupied over time.

When I joined, our software was infested with bugs, but the users of the product were generally satisfied and happy. It appeared as if the existence of bugs wasn't an issue externally. It was an issue internally - we could not make any sense of schedules for new features, because the developers got continuously interrupted by firefighting for bug fixes.

Over time working together, we changed the narrative. The number of bugs went down. The developers could focus better on new features. But the customer experience with our product suffered. They were not as happy with us as they had been. And this was puzzling.

Looking into the dynamic, we grew to understand that the product had both a product component and a service component. And while the product was great, the service was awesome. And there was less of the service when there was no "natural" flow of customer contacts. If a customer called in to report a problem and we delivered a fix in 30 minutes, they were just happy. Much happier than when they had no need for our service.

This past experience came to mind as we were organizing European Testing Conference 2018. Simon Peter Schrijver (@simonsaysnomore) was awesome as a local contact, focusing on a good experience for the venue by making things clear and planned in advance. As a result, things flowed more smoothly than ever before. There were "changes" as we went on setting up sessions and reorganizing rooms, and those changes required the conference venue to accommodate some unscheduled work. While we felt we could do this ourselves, this venue had a superb standard of service (highly recommending the Amsterdam Arena conference center) and would not leave us without their support.

Interestingly, some of us organizers felt less needed and necessary when there was less firefighting, bringing back the memories of the service component from the past. Would there be a way of knowing whether people were happier with past years' quick problem resolution (we were on it, so promptly) or this year's feel of everything just flowing? Whose perception of quality mattered? Interestingly, in retrospect we identified one problem that we had both in earlier years and this year. In earlier years we fixed it on the fly. This year we did not fix it, even though we should have had equal (if not better) financial resources to act on it. I personally experienced the problem with the microphones, and failed to realize that I had the power to fix it. I can speculate on the feeling of "executing a plan" vs. "emergent excellence", but I can't know the real causes of the effects.

This brings me to the interesting question of introducing intentional vs. accidental bugs. If problems, while they exist, make things better as long as we can react to them quickly, would moving from accidental to intentional be a good move? Here the idea of opportunity cost comes into play: are there other, less risky ways to focus on the pull of service than creating the need for the service with bugs?

With the software product, we needed to invest more in sales and customer contacts to compensate for the natural pull of the bugs for the customer to be in contact and nurture our mutual relationship. Meeting people on content can happen more at a conference with fewer issues to resolve. Did we take full advantage of the new opportunity? Not this time. But we can learn for the future.

Friday, February 16, 2018

Things the Frog Did Not Notice Before It Was Late


Looking at my quarter of a century in the software testing industry, I can look back at what I am now and what I've been earlier, and realize I've gone through some major learning experiences. Having gone through those learnings, they all now seem evident and obvious. But I've done myself a favor over the years, clarifying my stances in writing, allowing myself to see how my views change. At first I worried about sharing anything I thought was true because none of it might be, but writing more helped me deal with the concern and just embrace the positive.

Many of the foundational changes are things I did not see coming; they sank in slowly over longer periods of time. They are foundational in hindsight, and could easily be things I thought I had always known. Here are four that I think caused my whole belief system to pivot and find new possibilities.

1. Test Cases aren't What Good Testing is About

Early in my career, I taught testing at a university. The course had 120 students a year while it was part of a major, and I reached a substantial number of local young minds. Part of the course was a four-phase hands-on lab where the students would write a test plan and test cases, execute the test cases and report their testing, as well as automate a subset of their tests.

A few years passed, and some of my students taught me my first foundational lesson. We met in a real-world testing project, where I guided them into exploratory testing and cautioned against premature documentation of test cases at the time when we know the least, and against the opportunity cost of the documentation work in relation to the time we could spend actually testing the products. My students reminded me of my teachings early on with words I've come to cherish: "Great to hear you're doing this in a smart way, not the way you taught us it must be done at the university course".

I used to believe test cases were what good testing is about. But good testing is about finding relevant information with a limited budget, with consideration of what is best for today and for the future. Test cases have little to do with what is best for either.

2. Continuous Integration and Delivery Wins Over Change Management

I grew up with testing in mostly waterfall projects. By the time we were testing, many moons had passed without us seeing the software, hearing only rumors through requirements documents. We tested in phases, with a huge scope and a fixed schedule. And when it finally reached us, we needed to be careful with change management. Every fix would take us back to testing again, and we only wanted fixes that were absolutely mandatory. And we did not want them to come to us whenever they were available; after all, we were approving a build, not making sure that the end system was the best possible for the users by enabling change. Because change was a risk, and it usually materialized as something even worse.

I remember the person who first tried to talk me into the idea that continuous integration was a good thing, and how fiercely I resisted. Later, living through continuous integration and delivery, I can't imagine wanting to control change in the way we used to. Small changes with small impacts, and lots of them over time, make life much simpler.

3. Test Managers Can Make Testing Worse

With some years in testing, someone decided I could lead testing efforts and made me a test manager. I created strategies and plans, discussed with the testers I was working with how we'd follow through on those plans, and coached people into being better testers. I sat through meetings, building a great holistic picture of what was expected of us. And I tested some, usually something less time-critical because my attention could be pulled elsewhere. Then agile hit us. External managers trying to manage small self-organized teams did not make much sense anymore.

I stepped down from my "career path" and became a tester again. The testers I used to manage became better because they did not leave "my work" for me to do, but found better ways of doing it all. They became better testers when part of their work wasn't expected from the "manager". When I knew things others didn't, I could contribute just as much, if not more, as a colleague.

4. Test Automation is a Core Part of Testing

I've spent years honing the thinking part of testing. I've learned to work with software and hear what it tells me, combining all sorts of information while using the software I test as my external imagination. The thinking part and the manual execution part supported each other, and automation in testing was something that helped me reach things I wouldn't be able to do manually. But a lot of the automation was throwaway code. I had colleagues with a different focus in testing, creating test automation scripts that could run reliably over time, detecting unwanted changes. And I thought of those as separate things - the artifact creation and the performance.

Then I learned to do the part I used to watch others doing, and it changed the way I looked at it. It made me realize my previous company would have been better off from investing three years in me if they had gotten both the great exploratory testing results and a piece of automation that documents, in an executable format, some of my lessons that the team could use to hold their stuff together.

I realized the only reason for me to hire someone who does not do both exploratory testing and test automation, intertwining them, is that people have not yet learned the other. And we have lots of test automation specialists who are bad at testing; we even have lots of test automation specialists who are bad at coding. But they leave behind, in the long term, something that could help when they are not around. Those who don't automate make their impact on quality as we see it NOW.


There's the story of a frog not noticing when it is being boiled, moving on to a different purpose as food. The frog story might be a fable without a foundation in empiricism, but as a fable it describes the feeling of how things change. Many of the things that changed my views are like that. I did not notice them while I was in the middle of the process. But where I started and where I ended up are very different states.

Saturday, February 10, 2018

Test Automation Legacy Code

10 years ago I left an organization that was top-notch in exploratory testing but had no test automation. With exploratory testing alone, I helped introduce the foundation for what would later aim at continuous delivery, introducing continuous delivery (with a lot of manual steps) to a beta program. The technology preview concepts and ideas are still easily recognizable a decade later, without the memories of the history: it's just the way things are and have been.

While I was away, test automation got a foothold. Looking at it in hindsight, I'm happy I wasn't there to mess it up: great testers favoring the thinking part of testing and speaking up a lot about it are one of the most relevant blockers useful automation has, stopping automation from being born while it's still learning its place and form. Lesson learned: give room for things you don't believe in to grow, and they may grow into things you do believe in.

Now that I'm back, I look at the test automation that was created and feel joy at the accomplishment of introducing it there. I did not do it. Or maybe I did, by stepping away and leaving the battle of opinions unbalanced, for the automation side to win. But it is there, it is doing real testing, and while it has many, many problems, it is a cornerstone of the way we build and release products.

In the 10 years, I've changed. I've come to remember that I was 12 when I wrote my first program. I've come to appreciate internal code quality, and to recognize when it's lacking. I've stopped looking at the testing testers do, and started to look at programming productivity to produce the right quality. I've trained with people identifying as legacy code experts, and re-learned programming legacy code first, test-driven development second, and always driven by hands-on work over reading about it.

This week brought me a new appreciation of the role of legacy code in what I do now for our test automation system. I'm helping us clean up the mess without removing the value, so that we can add more value. I draw from lessons on legacy code and lessons on (test) product ownership, and intertwine them so that the automation we have better serves a product line.

I look at this as a lifecycle. There's someone to select (or create) the framework. There's someone to use that framework, adding tests to the best of their abilities, doing real useful work. And still, there's the time when the code running the test automation is legacy, still living and breathing, and needing attention so it doesn't block us from our future enhancement aspirations.

We're inclined towards a rewrite, while refactoring is the better option. When the existing structure emerges from the mess of duplicated details, changing pieces becomes timely. Mending the systems, not making them.


Friday, February 9, 2018

The War of Ownership


Agency. It's the fancy new word introduced to coin an insight in a war I don't want to be fighting. The fact that it's referred to as a war is the first hint that what this says has little to do with the collaborative, non-violent software development we aspire to.

This is a war that says that in the kinder, collaborative way of working, testers don't feel safe believing their existence is justified. They're struggling for life as they know it. I don't feel like I want to join that war; I want the war to stop. And how to stop a war isn't my specialty, but I suspect it has something to do with finding options and making working agreements. And when a party at war isn't willing to make any agreements, the stronger one wins. Newsflash: the tester profession isn't winning this war. It is taking steps further into alienating itself from the tables where decisions about the future of software are made.

I'm selecting a few points from Twitter to emphasize my takes, words by Michael Bolton.
"Testing doesn't make your code better. Testing doesn't make your code testable either. YOU make your code better, and YOU make it more testable, and those are fine things.
Testing isn't an abstract thing that happens. There's someone who does it. And there are two clear choices of who that someone might be in the jobs we have in the industry. It could be a tester. It could be a programmer.

The programmer is YOU in the clipping above. The programmer makes the code better, the programmer makes it more testable. And the programmer tests. And in my experience, programmers who don't test rarely create good software.

There's the other option of who it might be. The tester. It is in the tester's interest to create a clear line and separation. But the trend is to remove that clear line. Many organizations report great results from blurring the lines. They aren't making everyone the same, but they are removing the man-made absolute lines between who does what. They are saying everyone does what their skills allow. Everyone learns. And everyone learns also about testing and about ways to build software that makes users awesome.
"...make explicit a central theme of our Rapid Software Testing classes and consulting work: agency. We want to help empower people; shine light on what they do; help to liberate them."
What I read in this statement is that people = testers. Testers that fit the Rapid Software Testing methodology requirements. I've been told in the past that I'm not a tester (per the RST terms at least). Yet that is exactly the position I hold, and have been holding, working hands-on with products for the last 25 years.

Empowering people by creating a clear distinction between roles (roles that are job functions and don't need such a clear distinction in a world of collaboration) would have to include allowing them to see the world their way, and mediating. But that is not the world RST seems to serve. It serves a world that I don't see as a practitioner in the companies I work with.

I recognize I'm selective. I choose product companies. I choose agile methodologies. I choose ones that believe in empowering and listening to all their experts.
"I don't think there's enough salt"
I can notice a lack of salt, and add it. I don't remove myself as an actor when the salt needs adding, if (and when) I know what the appropriate amount is. I don't need to remove myself as an actor in fixing just because I was hired as a tester.

When we talk of the concerns about limited time, and of the choices we make in splitting into roles to ensure different concerns get covered, we are using concepts from a time before continuous delivery. The world has changed. Quoting Neuromancer from memory: "The future is already here. It's just not evenly distributed." We don't need, and can't have, one true way anymore.

Software development is a process of transforming ideas into code. Which of the ideas get labeled what isn't as relevant as we think, for reasons other than keeping the profession we love. What could be the ways to add meaning to this conversation that is stuck on violence and war? Isn't there a more constructive way to build a profession that draws the lines around testing for the purpose of understanding the tester?


Note added later: I did not need to read JB's article to pick up words from the title. This is not a response to his article. This is a response to the tone MB runs on Twitter.

Thursday, February 8, 2018

It's just semantics

I work in product development, building and testing a product. The product is a Software-as-a-Service type of product, extending beyond the idea of renting an app from the cloud. Some parts of the product change as much as 20 times a day, introducing new functionality for the service the product provides. When I test, I don't test only the software components, but the whole customer journey and experience of dealing with our product. And with some millions of customers and a long-term commitment to them, striving for better for them is a fun area to work in. There are no projects. There's the product that lives on.

So I wrote a piece of my mind talking about test automation as a product. It too has users, a long-term commitment to them, and is intertwined in appropriate ways with the way we develop. And an ex-colleague decided to comment on Twitter:


My first reaction is to say "it's just semantics" - "wordplay". Semantics is the meaning of words, and surely the meaning of words matters? In this case, I don't care about the difference between "product" and "ecosystem". I don't care for the focus on a single word, when I've just used many to explain a lot more than just that word.

To say "it's just semantics" is to say that in this conversation, I'm done. The way you approach the discussion with me just turned sour, and I'm  not committed in continuing. You're derailing me. 

I read a wonderful book called Crucial Conversations, which talks about these types of dynamics in conversations that matter. And conversations around the nature of testing matter a lot to us testers. The book introduces the idea of two ways of closing the flow of meaning into the pool: violence and silence. Correcting words is a form of violence. My default reaction is silence, keeping the violence option of "it's just semantics" hidden in the back of my mind.

As we would want to add meaning to the pool when discussing, closing down communication isn't a good thing. We can choose to stop and ponder our reactions, and work back towards a place of trust. We can learn more, add more meaning to the pool, if we just keep at it.

I know Valera as an ex-colleague I have the utmost respect for, and explaining to myself other possible meanings of his corrective statement isn't hard. He means well, just playing on my triggers. I've needed the same reminder about good intentions a lot with men who explain things to me, without me knowing them or them knowing me.




Counting test cases

Confession: I count test cases. Before you get all riled up, read further. I count test cases for the purpose of understanding how much of something there is. A typical example of my counting is "30 test cases in our test automation" for understanding how many conceptual program pieces there are, or "100 lines of functionality added, yet the number of unit tests stays the same". Counting things is useful, but it is not all there is.
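
This kind of counting can be a few lines over the automation code; here's a minimal sketch, assuming pytest-style test functions under a hypothetical tests/ directory, of counting how many conceptual pieces there are.

```python
# Minimal sketch (assumes pytest-style tests under a hypothetical tests/ directory):
# count test functions to get a rough sense of how many conceptual pieces exist.
import ast
from pathlib import Path

def count_tests(root: str = "tests") -> int:
    total = 0
    for path in Path(root).rglob("test_*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        total += sum(
            1
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef) and node.name.startswith("test_")
        )
    return total

if __name__ == "__main__":
    print(f"{count_tests()} test cases in our test automation")
```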

On the other hand, it's been at least 8 years since I last counted test cases in the sense of understanding how many there are in a manual test set, how many of them have passed and how many failed, and how many are yet to be discovered for our list through exploring. To be more precise, it's been 8 years since anyone managed to coerce me to write down a test case, or to guide anyone close to me to write them down. Instead, I write test cases into automation and free up the majority of my time for freeform exploratory testing.

It is also 6 years since I last did session-based test management, counting sessions or time in functional areas as a measure of progress. And even then, I did it for two weeks to prove a point: I was worth trusting to do good testing without paying extra in time to impose a visibility framework of this sort.

These became irrelevant to me as I helped my teams move to continuous delivery. When we manage a scope of hours or days instead of weeks or months, the numbers no longer matter. The quality of the testing we do matters. And we learn about that as we deliver continuously, carefully tuning so that our customers could forget we ever updated their software.

I started this post with the idea of examining my views on counting test cases, were I asked to do it again. With all the experience I have, would I? When would I? And is there anything I would advise those who still do?

Finding the Least Amount of Meaning

A core principle in testing is one coined by Dijkstra quite a while ago: we can't prove the absence of bugs, only show their presence. So even if a million test cases passed, the tests that are worthwhile are the ones that fail.

Twitter brings me a haha moment:
The image with its texts captures the least meaningful way of counting test cases: counting the ones that passed. Counting the ones that did not find bugs. Counting only the ones that pass. Forgetting that each change made for the bugs found invalidates the ones already passed, introducing a new test target.

Adding More Meaning

Thinking back 8 years to the time I last counted test cases, I remember a futile battle turning into a productive negotiation. I started off with the premise that the way things had always been done - counting passed and failed tests - was a way to take us to bad testing and a bad relationship with management. I was faced with the fact that with a 30-day acceptance testing project after a multi-year delivery project, no one was comfortable without a way to see how testing was progressing. I couldn't go full-on session-based test management; I found it would have been a poor choice to replace what was in place, given the amount of work needed to ramp up the skills of business specialists in the methodology.

I approached the problem at hand with experimentation. Experiments are a way of asking to try something different, just this once, without committing to doing it again, because it may turn out badly too. We started off where the organization was before me: writing test cases in advance, and following pass/fail numbers throughout the 30 days.

In the first 30-day acceptance testing, I started stretching what I perceived as the biggest risk of using test cases as a measure in the traditional way: the quality of the testing that gets performed. With pre-designed test cases, you create the ideas of what to test when you know the least. You have no software in your hands, just the promiseware of requirements. The test cases were created by looking at an old version of the product, imagining how the promises would change it, and writing scenarios that walk us through seeing the changes in action.

With my lead, we introduced two kinds of test cases. The first batch was just like it had always been: details of where to go and what to look for. The second batch was different. We used the HP toolset to create a template test case, an idea of reusable steps for test cases. The template test case steps were a high-level outline of the process the system supported us through, with no details. The actual test cases were test data: people whose data we could use to walk through the process in different ways. We split the time available so that we first tested with the traditional type of tests for half of the time, and the other half was left for what was essentially exploratory testing.
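
The same split translates directly to today's data-driven test frameworks; here's a minimal sketch, with a hypothetical process and hypothetical person records, of a template of reusable high-level steps where the actual test cases are rows of test data.

```python
# Minimal sketch (hypothetical process and person records): a template test of
# reusable high-level steps, where the actual test cases are rows of test data.
import pytest

# Placeholder implementations standing in for the real business process.
def start_application(person):
    return {"applicant": person["name"], "income": None}

def submit_income_details(application, person):
    application["income"] = person["income_type"]

def process_application(application):
    return "approved" if application["income"] != "none" else "rejected"

# The "test cases" are test data: people whose data walks the process differently.
PEOPLE = [
    {"name": "pension income", "income_type": "pension", "expected": "approved"},
    {"name": "no income", "income_type": "none", "expected": "rejected"},
]

@pytest.mark.parametrize("person", PEOPLE, ids=lambda p: p["name"])
def test_application_process(person):
    # Template steps: a high-level outline of the process, no UI details here.
    application = start_application(person)
    submit_income_details(application, person)
    assert process_application(application) == person["expected"]
```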

All the bugs we found - and we did find quite a few - were found with the latter type of tests. We learned the mix was really good for us at that point in time. Jumping directly to freedom would have made people nervous. The mix of the old and the new allowed us to do great work, stretching people not too far from their current skills and comfort zones. We reported tests planned, passed, failed, and started-yet-not-finished across both types of test cases.

In the second 30-day acceptance testing, which I led for a different product, we stretched further into exploratory testing. The system we were testing had complex processing logic, with one step reaching out to a third-party system and including manual processing. We again created test data as test cases and template test cases as reusable steps, and step 7 in the 12-step process was the information the third-party system needed to pass us. The group doing the testing was seasoned in the business process and had never used test cases before, and this was a perfect fit for them.

The results in what testing found before going live were equally great. The test numbers showed us that a big portion of the tests were in the started-yet-not-finished state, and helped us encourage the third party in tracking whether our requests for info arrived on both ends.

The third 30-day acceptance testing I led experimented with the secondary risk of using test cases as a measure of progress: conveying the nature of testing as an activity. In the first two efforts, I was aware of the illusion that tests marked passed or failed were creating. Every time we found a problem, a new version of the system was introduced. When we found a critical, cross-system, change-introducing bug at 80% of tests passed, the remaining 20% wasn't really enough. The metric was not only founded on guidance that lowered the quality of the testing that could happen, but also encouraged lying about coverage by assuming there was no change.

We still used test case counts, but we changed our graphs and communication to the metaphor of a progress bar. We all know what progress bars are like. The time waited for something to update and the number shown on the screen often have some connection, but it is not predictable or reliable. It's something that just says 'hold on, wait, be patient, working on something'. With the progress bar, we introduced a 30% "invisible tests" allocation, showing the effort we expected for repeating tests or introducing new tests while testing. By the time we were at the old 100% of tests passed, we really needed the extra 30% to run tests again against the changes, and we avoided the old foolish pattern of non-testing managers deciding we were done when the planned things had been done once.
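
In numbers, the reporting arithmetic is simple; a minimal sketch with hypothetical counts:

```python
# Minimal sketch (hypothetical counts): report progress against the planned
# tests plus a 30% "invisible tests" allocation for reruns and tests added on the fly.
planned = 200          # test cases planned up front
executed = 200         # planned tests run once
reruns_and_new = 38    # reruns after changes plus tests introduced while testing

budget = planned * 1.3                       # planned + 30% invisible allocation
progress = (executed + reruns_and_new) / budget
print(f"Progress bar: {progress:.0%} of the testing budget used")
# The old-style count would already claim completion at this point.
print(f"Old-style count: {executed / planned:.0%} of planned tests passed")
```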

Why Would a Project Need Test Case Counts?

I'm not for test case counts. However, when I have to deal with them, I've learned the core of playing them towards the goal of doing a good job testing:
  • Free the "test case". It's just any placeholder of things to do. It could be an exploratory testing charter. They don't need to be the same size. Trying to make them same size is just foolish. 
  • Communicate 'best before' idea for results. A test passed today can be not executed tomorrow. And how quickly the 'best before' date hits you, depends a lot on the organization.  
Projects need test case counts if they have no other measure of progress and are not ready to place trust in a spoken, reliable measure of progress without a forced test-case-counting methodology. 

When I started looking at testing as a time investment and reporting against time, things got more straightforward for me. Given a week, I can always say that with 4 days used, I have only one left. While exploring, I can explain what I've discovered in that time, and what I would use the next week on. I can do that, teams of exploratory testers can do that, but not all business specialists temporarily assigned as testers can do that. 

I know counting test cases is meaningless. I know the same test case done early on can take more time because I can't stop myself from exploring around whatever I was given. I know the same test case done again later can find a problem that was there in the first place, but that I was just not far enough in my learning to see it. Constraining on test cases when the process is about learning makes absolutely no sense.

But I accept that sometimes I have to do things that make little sense to me, because they help others. I also know that I can experiment and offer alternatives that slowly take people towards where I am in understanding the dynamics around testing. Sometimes, asking people to trust me on my perceptions of status is enough. I've learned to be away enough to build ways of working that don't crumble in my absence. 

A great option to take people towards is more frequent deliveries. When meeting an organization that counts test cases, that is now the default change I would go about introducing. 



Wednesday, February 7, 2018

Driving test automation forward as a product

I'm in the middle of a very complicated relationship, best defined as love-hate. On some aspects of it, I just LOVE what we've done. Yet on other aspects, I HATE where we are. It feels both a little schizophrenic and balanced. And I'm talking about the test automation I work with.

I work with it by being on the sidelines. I know I can step in whenever I feel like it, but no one requires me to. I can look at it both as an insider and an outsider. My place and position is unique. I find that I see things others don't pay attention to, and my attention brings out things others wouldn't otherwise be paying attention to. And I share this position with you, my dear reader, because there's something you could consider here:

  • if you are deep into automation, what a step back can give you as perspective
  • if you are not deep into automation, what you can make sense of just by seeing concepts and reading code "as if it was English"
I'm working out my relationship with test automation because I'm no longer ok with test automation doing a bad job at testing, or with myself being a blocker for others by focusing on what it cannot do over what it can do. 

There are things that I love, where other people's appreciation helps me appreciate them more. 
  • Our ability to run automation that brings 14,000 clean OS instances up and down a day is quite an achievement, and so is the fact that going from "I want a clean OS to install on" to "I can start installing" is a matter of a few seconds. 
  • When a new person joins and isn't left to discover the environment on their own, it takes a day to get started. Comparing this to a new person discovering it on their own taking weeks, with basic proficiency closer to 6 months, I'm an even keener fan of pairing new hires for their first tasks. 
  • It runs and it is kept running. It enables releasing in a way products of this complexity could not be released without it.

There are things that I hate that others seem to hate much less.
  • It guides new hires to create a corner of their own rather than share common assets.
  • It has tons of decisions embedded over time that allow others to be judgmental about later hires "not doing things right".
  • Reuse of things has a manual coding element, taking days of coding just to introduce a concept like "the same tests in another environment". And people would rather spend the days on the manual task than create an abstraction. 
  • People think of it as "testing a lot" because it runs often, even if for a very limited set of things to test. It distorts *managers'* concepts of how well we've tested, when the same thing 1000 times is not really 1000 times more testing.

So when I said I will reframe myself as an architect, I find I reframe myself first as a test automation architect. I choose to work on things that drive the overall structures for the better. And just expressing the things I would like to see us work on brings me to an interesting place of shining a light on things that have simply been the way they are. 

Since I still don't end up dwelling in the code and implementation details all my days, I see concepts. I see that there are tests that are small (which I want more of) and tests that are large (which I want less of) - and the structure does not help me see them. I see tests, test-specific methods and common methods, and again the structure does not help me see them. I see products, applications and components, and again the structure does not help me see them. I see similar uses of resources, like malware samples, temporary data and persistent data, and I see that their use isn't consistent.
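
One way to make such concepts visible without a rewrite is tagging; here's a minimal sketch, assuming pytest and using hypothetical marker and product names, of marking test size and product area so they can at least be selected and counted even before the directory structure reflects them.

```python
# Minimal sketch (assumes pytest; marker and product names are hypothetical):
# tag size and product area so tests can be selected, e.g. `pytest -m "small and scanner"`.
# The markers would be registered in pytest.ini to avoid unknown-marker warnings.
import pytest

def scan(data: bytes) -> str:
    # Placeholder standing in for the real component under test.
    return "clean"

@pytest.mark.small
@pytest.mark.scanner
def test_clean_sample_reports_no_detection():
    assert scan(b"harmless bytes") == "clean"

@pytest.mark.large
@pytest.mark.scanner
def test_full_install_and_first_scan():
    # End-to-end flow: provision a clean OS, install the product, run the first scan.
    ...
```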

I'm in a place where I have the vision of where we might head for the better, with limited ability to implement it all by myself. I might be paralyzed by my abilities alone, but others with different abilities may be paralyzed by not seeing the things I see, or not requiring the things I require. In the last three years, I've acquired a superpower that allows me to still do much about this: pairing and mobbing. That superpower, in addition to making it possible to turn my great ideas into code, gives us all a chance of learning together. And I'm looking forward to it.

Test automation is a product that tests our other products. Caring for its overall quality is just as necessary as caring for the details of each test. 

Tuesday, February 6, 2018

Security, Testing and My Place in All of This

Where I work, we have this constant struggle between us (the good guys - cyber security specialists) and them (the bad guys - in their various forms). The struggle is kind of fascinating to look at from a tester's perspective.

We know software always has bugs. The good ones among us are good at catching them, and at catching some of the relevant ones before we ever deliver our software out. And we catch some of them after release, listening carefully to signals and confirming suspicions. As a tester, I've come to the ultimate conclusion that what matters is our ability to change. It matters because we will miss something, regardless of our best (and improving) efforts. But it matters more because we have an adversary that plays the game from a different angle.

The people who create malware are software developers, just like us in many ways. I find it fascinating to think whether they have the same need to invest in testing. Any software with the right kind of bugs can give an attacker a route into a system they shouldn't be in. Does it matter if the software using an exploit is fine-tuned and tested?

I had a chance to talk with an old tester colleague of mine who now uses his analytical skills in working with security after something bad happens. Something bad these days could be a company ending up with ransomware, lots of critical data inaccessible because it has been encrypted. You could try restoring from backups, but what if your incremental backups don't sum up to a full one? Apparently you can also use your testing skills, testing the malware and finding bugs in it to be able to open the encrypted files again. The joy on my colleague's face on finding the bug and using it to open up all the files was the same joy we feel when we catch relevant bugs in our software. The skills used are the same or similar; the target of testing is just completely different.

Working in a security company makes me wonder about the role of testing. A lot of the bugs I routinely handle have a lot to do with lost time, lost services, inability to do what needs doing. But some bugs, the security ones, have to do with lost access and lost data.

We're starting to see the value of security as data becomes crucial. I wonder if we will ever see the value of people's time, or if other solutions to free up time will win out over delivering well-functioning (tested and fixed) software.

We live in interesting times. And today I stop to appreciate that what I do for *this* software, I could do for any software. 

Friday, February 2, 2018

Reading with rose-colored glasses

As a tester, I specialize in feedback. I both find things that no one else was bringing to the table, and amplify things someone else did so that the feedback gets the attention it needs. One of my favorite sayings is from Cem Kaner's book Lessons Learned in Software Testing.


I got to thinking about this today, as I had two people to give a piece of positive feedback to. 

I approached the first, using the phrase "waking up a security bear" to emphasize that something they did resulted in the positive outcome of identifying and addressing a vulnerability. The positive feedback was taken at face value. 

I approached the second, explaining a little more context of why this was important. And while I thought I was still trying to say "well done", I got into a spiraling discussion of what was wrong with what they did. Reflecting on the interaction, there was *one thing wrong*: the immediate response that the bug had been previously reported and dismissed back then. 

The first one approached the feedback as something positive. The second approached the feedback as something negative. I ended up with two completely different interactions for the same message: job well done, I would like to turn up the good, and this was good. 

The whole experience took me back to a one-liner from my boss: "I want to talk with you". My immediate response was "bad or good". I chose to wear my rose-colored glasses and assume positive intent. 

Putting these two things together, I realize that wearing the glasses of good intent is the single most relevant thing I have done to feel happier and more successful as a tester. The world is filled with good, and even the (negative) feedback we are bearers of is positive from a constructive angle.