Setting stretch goals

In addition to your everyday, project-related work, it’s a good idea to work on something “extra”.  If your “extra” work benefits you as well as the organization you work for, it’s a “win-win”.  Depending on your manager, you could do this work during the work day or after hours.  If you are short of time, you can try a working lunch.  There were some great suggestions on finding time in a busy schedule on the Software Testing Club forum.

When creating goals, it is critical to find subjects which really interest you.  This is the most important factor in how well you achieve your goals.  A lower priority is to find topics which will help your career (ideally these overlap with your interests).  As I mentioned, if the topics you choose have synergy with your work, that is a bonus.  (It’s important to think through these three dimensions – your interests, your career and the goals of the organization you work for.)  I knew an engineer who was interested in magic; I shared some resources showing the similarities between magic and software testing.  Another engineer was interested in sales; I suggested that he study some of our competitors and spend time improving his presentation skills.

It’s a good idea to write down your goals and discuss them with your manager.  You are not looking for ‘brownie points’; you are looking for feedback on the goals.  Your manager can also make sure the timelines are realistic, periodically check whether you are on track, and help you balance the “extra” with your project work.  (If you are a test manager, you could incorporate these goals into quarterly/annual planning.)

If creating such goals is not part of your company culture, feel free to talk to your manager and say you are looking for their support.

Here is an example of a stretch goal for a tester with around five years’ experience.  This is for a very technical tester who is good at programming.

  • Identify the thought leaders in mobile testing (January 20)
  • Identify papers/presentations/videos which provide insight into mobile testing (ongoing)
  • Teach the rest of the team the basics of building a mobile app.  Do this by using IBM’s tutorials on Worklight, getting the team to create an app on their own and deploy it to a mobile phone.  Introduce them to the core technologies for creating mobile apps. (January 30)
  • Identify tools that can be used for automation and performance testing. (January 30)
  • Identify tools/technologies which can be used for quick scripting on mobile apps (January 30)
  • Present some of the key failure points for mobile apps and allow the team to provide feedback. (January 30)
  • Refine the list of failure points. (March 1)
  • Depending on your interest you can choose one of the following to focus on in the long term:
    • helping the team understand the technologies/tools used to create apps
    • test automation
    • performance testing
    • defect catalogs

Note that this list includes activities which will benefit other team members.  In addition to helping others, it is also a good opportunity to improve your skills in presenting and communicating your ideas.

Most organizations would be more than happy if their testers set goals like this.  I don’t think any manager would be reluctant to let their testers set aside some time to work on such goals.

Setting such goals allows you to stand out and be recognized.  At the same time, it helps you progress in your career.

There are no ‘requirements’ in agile!!

It surprises me to hear staunch agile followers still using the word “requirement”.  When questioned, they respond, “…a story is a requirement.”  It isn’t.  Early in my reading about agile I came across this analysis of the word “requirements”:

“Software development has been steered wrong by the word “requirement”, defined in the dictionary as “something mandatory or obligatory.”  The word carries a connotation of absolutism and permanence, inhibitors to embracing change.  And the word “requirements” is just plain wrong.  Out of one thousand pages of “requirements”, if you deploy a system with the right 20% or 10% or even 5%, you will likely realize all of the business benefit envisioned for the whole system.  So what were the other 80%?  Not “requirements”; they weren’t really mandatory or obligatory.”

(I am leaving out the reference to this quote so there is no bias.  If you don’t recognize the quote or the style of writing it is worth your time trying to find the source.)

In agile, instead of requirements you have user stories.  It takes quite some time to make the transition from requirements to stories.  For me, after reading the previous quote, Mike Cohn’s statement about stories was essential: “A story is a reminder to have a conversation.”  Creating good stories is, of course, difficult; Mike Cohn has written a book about that.

Alas, “requirements” are even more entrenched in the software testing world.  Interestingly, back in 1999, a great tester wrote a wonderful article about requirements, in which he wrote:

There are at least four alleged truisms about testing a product against requirements. Most of the testing textbooks on my bookshelf promote these principles, and each principle reflects some truth about the dynamics of testing.

  1. Without stated requirements, no testing is possible.
  2. A software product must satisfy its stated requirements.
  3. All test cases should be traceable to one or more stated requirements, and vice versa.
  4. Requirements must be stated in testable terms.

When we think in terms of risk, however, I believe a richer set of ideas emerges.

(I am again leaving out the source to avoid bias.  You can easily search for the PDF article.)

If you haven’t realized it by now, keep in mind there are no “requirements” in waterfall!!

In the “waterfall” (pre-agile) world, there was some good work on “requirements”.  Making the transition from that world to a world of “user stories” leaves many questions open.  I still believe that, with a good agile coach and a good testing coach, teams are better off making the transition.  However, writing stories and trying to determine what might go wrong can require a lot of work.

Senior test roles

Here are my expectations of people in senior test roles.  In many cases, people in senior test roles seem to spend a lot of time on administrative tasks.  I think that is a mistake.  Let me know what you think. (Note: these are my opinions and may not represent those of my employer.)

Test Lead role

 Expectations

  • The test lead must understand the philosophical basis for testing, e.g., what is quality? When are we done?
  • The lead must have a good understanding of testing.
  • They must regularly transfer this knowledge to the test team.
  • For organizations involved in software product development, the lead must provide technical leadership to the test team (read: research).  The lead must play a role characteristic of an R&D/software lab.
  • The test lead must keep abreast of current thinking in testing.
  • Leads should have decent computer science knowledge.  They should understand practices such as continuous integration, unit testing and TDD, and should have good scripting skills.
  • They must know the state of testing in IBM and be aware of the various tools used for testing and process management there. (This will differ for other organizations.)
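The expectations above mention scripting, unit testing and TDD.  As a minimal illustration of that kind of fluency, here is a sketch in Python using the standard `unittest` module.  It is written TDD-style (tests stated before the implementation existed), and `normalize_version` is a made-up example function, not part of any real product.

```python
import unittest


def normalize_version(version):
    """Parse a dotted version string such as '2.10.1' into a sortable tuple."""
    return tuple(int(part) for part in version.split("."))


class TestNormalizeVersion(unittest.TestCase):
    """Written first, TDD-style, before normalize_version was implemented."""

    def test_parses_dotted_string(self):
        self.assertEqual(normalize_version("2.10.1"), (2, 10, 1))

    def test_tuple_ordering_beats_string_ordering(self):
        # As plain strings "2.9" sorts after "2.10"; as versions it must not.
        self.assertGreater(normalize_version("2.10"), normalize_version("2.9"))
```

Run it with `python -m unittest`.  A lead comfortable with this red-green loop can both review the team’s automation and mentor engineers on it.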

There are at least three areas with strong potential for technical leadership – security testing, performance testing and test automation.  The test lead will likely excel in one of these in the long term, though they can choose another area if they wish.

 Test lead duties

80% time

  • Is ultimately responsible (along with the test manager) for the quality of the testing.
  • Must review testing done by the test team.  In an agile context with multiple teams, the test lead is responsible for the testing of all features; this may require attending multiple scrum meetings every day.
  • Must mentor test engineers until they can work independently.

Remaining time (20%)

  • Liaise with other groups – development leads, release manager, ID.
  • Review customer issues to improve testing.
  • Release management, along with the release manager.

 Test Architect

The test architect role may be shared between projects, and the architect is not responsible for specific project deliverables.  The expectations include everything stated above, as well as the following:

  • Must have led multiple release cycles for IBM or related products (enterprise security); this will depend on the organization.
  • Must have a strong understanding of various security-related technologies (e.g., CISSP-equivalent knowledge).
  • Must understand technologies such as web servers, application servers and J2EE.
  • Must have a strong understanding of scripting, programming and computer architecture.
  • Must have rigorously studied customer issues for a few years on the products they have worked on and released.
  • Must have demonstrated understanding of requirements gathering and the testing of requirements.
  • Must have detailed knowledge of testing.
  • Must have a significant understanding of quality.

What is software testing?

This is my answer when the person asking comes in from the cold.

Take a look at Giridhar Vijayaraghavan’s thesis, ‘A taxonomy of E-commerce risks and failures’.  Here is a mind map which shows the component failures and another which shows the qualitative risks (both mind maps can be downloaded and viewed using XMind).  This is the original thesis.  Vijayaraghavan analyzes the different ways in which an e-commerce shopping cart can fail.  The mind maps give you a quick overview of the various failures.  Note that the original thesis contains specific examples of the failures, as well as examples of failures which have been published.

When you look at the many ways in which a shopping cart can fail, you realize the impotence of concepts such as “test cases” and automation.  If nothing else, stare at the two mind maps for the shopping cart and read through the many different failures to understand what testers need to do.

For similar ideas on failure, take a look at Cem Kaner’s defect catalog (Appendix A of Testing Computer Software, by Cem Kaner, Jack Falk and Hung Quoc Nguyen).  Here is a mind map and the original list.  You can download the mind map and expand the branches in XMind.

Note that neither the shopping cart taxonomy nor Kaner’s defect catalog should be used blindly (without context).  They can be used to generate ideas; however, you need to determine their relative importance for your project.
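To make “generate ideas, then weigh them for your context” concrete, here is a hypothetical sketch in Python.  The catalog entries and risk weights below are invented for illustration; the point is the pattern, not the data.

```python
# A hypothetical defect-catalog fragment; real catalogs (Kaner's, or the
# shopping-cart taxonomy) are far larger.
CATALOG = [
    "Numeric boundaries: off-by-one at limits",
    "Wrong default values after reset",
    "Race condition on shared resource",
    "Truncated output on long input",
]

# Project-specific risk weights: 0 = not applicable here, 3 = critical.
PROJECT_RISK = {
    "Numeric boundaries: off-by-one at limits": 3,
    "Wrong default values after reset": 1,
    "Race condition on shared resource": 0,  # say, a single-threaded app
    "Truncated output on long input": 2,
}


def prioritized_test_ideas(catalog, risk):
    """Order catalog entries by project risk; drop the inapplicable ones."""
    weighted = [(risk.get(entry, 0), entry) for entry in catalog]
    return [entry for weight, entry in sorted(weighted, reverse=True) if weight > 0]


for idea in prioritized_test_ideas(CATALOG, PROJECT_RISK):
    print(idea)
```

The catalog supplies the ideas; the weights are yours, and they are where the real testing judgment lives.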

Of course, there is much more to software testing.  Read The Little Black Book on Test Design for more.

Agile Testing Days 2012 Buzz

Agile Testing Days 2012 is being held in Berlin on November 19th.  Here is my concise summary.  I have included links when available.  Some of my summaries are so concise that you’ll have to stare at the words for a few minutes.  If you get tired of staring, click on the links.

  • Management
  • Successful automation
  • Transition to agile
  • Ambler on agile
  • Legacy code?
  • Performance
  • “Mindful Team Member: Working Like You Knew You Should”
  • Mindmaps
  • Spec by example
  • Really hands-on, no ppt
  • Ambler on agile
  • “5A – assess and adapt agile activities”
  • Data->quality story
  • Endusers
  • Distributed teams
  • Understanding agile
  • Distributed teams
  • Line Managers
  • Exploratory
  • What is A testing
  • Agile->Culture
  • World of Warcraft
  • Communication
  • Developers exploratory
  • “Self Coaching”
  • “How to change the world”
  • Factory requirements
  • Shorter releases
  • Good news
  • CI experience
  • New techniques
  • Continuous Delivery
  • Test developer
  • Markus on 21st century tester
  • Advanced CI
  • “Testers Agile Pocketbook”
  • Continuous testing
  • Sapient
  • “Reinventing software quality”
  • “Fast Feedback Teams”
  • User stories
  • RIA BDD
  • Test Oracle
  • Slack=creativity
  • Requirements+testing
  • Javascript TDD
  • BDD
  • Open source tools
  • Test data
  • Change
  • “Technical Debt”
  • Bank context driven
  • “The ongoing evolution of testing in agile development”
  • Rigid environment
  • Right thing right
  • Mobile automation
  • CI
  • Winning the game

blackhat US 2012 arsenal – security testing tools

blackhat US 2012 has sessions where researchers can showcase tools, projects and demos.  Here is my super concise summary of the arsenal.  I have tried to highlight when tools are free/opensource.  Also see my summary of the conference.

  • Binary visualization, evolution of the hex editor – ..cantor.dust..
  • Collaboration on Metasploit – Armitage
  • ARP and DNS poisoning – ARPwner
  • Amazon Web Services – AWS Scout
  • Python tool for protocol fuzzing – backfuzz
  • Burp Suite extensions – Burp Extensibility Suite
  • Bypass CAPTCHA – Bypassing Every CAPTCHA provider with clipcaptcha
  • Crowd reverse engineering – CrowdRE
  • Network simulation to study malware – FakeNet
  • Fuzzing PHP – GDFuzz
  • Metasploit NTLM relay (open source) – Generic Metasploit NTLM Relayer
  • Python scriptable pen testing – Gsploit
  • Python tool to exploit HTACCESS – HTExploit, bypassing htaccess restrictions
  • Java email phishing tool to test social engineering defenses – ice-hole 0.3 (beta)
  • Machine learning applied to security incidents pre-breach – Incident Response Analysis Visualization and Threat Clustering through Genomic Analysis
  • Sniff iPhones and iPads (open source) – iSniff GPS
  • USB human interface devices in pen tests; PowerShell scripts for offensive security and post-exploitation – Kautilya and Nishang
  • Volatile memory from Linux/Linux-based systems – LiME Forensics 1.1
  • Add security features to apps post-development – MAP
  • Better host-based incident response – MIRV
  • Web application firewall (open source) – ModSecurity Open Source WAF
  • OWASP project for training and experimentation – OWASP Broken Web Applications Project
  • Assess OData – Oyedata for OData Assessments
  • Python tool to explore PDFs – peepdf
  • PHP eval function – phpmap
  • Incident response and investigation (free) – Redline
  • Registry analysis (free?) – Registry Decoder
  • SAP GUI network traffic – SAP Proxy
  • iOS apps – Semi-Automated iOS Rapid Assessment
  • Open-source smartphone pen testing – Smartphone Pentesting Framework
  • Search engine hacking (free) – Tenacious Diggity – New Google Hacking Diggity Suite Tools
  • Vulnerability aggregation and management (open source) – ThreadFix
  • Web security scanner (open source) – Vega
  • Manual and automated web app assessment (Ruby, open source) – WATOBO – Web Application Toolbox
  • Attack XMPP connections – XMPPloit
  • Mobile IPS – zCore IPS