Hand over

Everything a performance tester needs to cover

In a modern enterprise internet company the performance tester needs to cover a large range of topics/requirements/tool sets and approaches. This page outlines, as far as possible, all the areas to be considered for handover in one particular, fairly typical, organization. I'll start with lists and see where we get to with details.

Top level areas:

  1. Front end performance testing and client resource loading patterns
  2. Back end performance and services/caches/databases

Requirements

  1. Front end loading requirements: page complete and 3rd party resources
  2. Back end requirements for services: read/write times under specified loads
  3. SLAs on 3rd party systems

Tool sets

  1. JMeter
  2. Jenkins
  3. WebPageTest
  4. Splunk
  5. Bamboo
  6. Facilita Forecast (deprecated, but old scripts may need analysing)
  7. HP LoadRunner
  8. General Linux tools and bash scripts
  9. AWS console and APIs
  10. Adobe Analytics
  11. Confluence
  12. Jira
  13. Cacti (server monitoring)
  14. Dynatrace
  15. bash (use Google for the tiny details! what a painful language to work with day to day...)
  16. Excel. This is limited for the large data sets we typically use, but can be useful on occasion.

Monitoring (providing monitors as a separate perf testing job)

  1. Monitoring of various performance metrics (see the Splunk pages)
  2. Monitoring system usage for SLAs
  3. Monitoring front end page load times (a minimal WebPageTest sketch follows this list)
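
As a rough illustration of the front end monitoring, here is a minimal bash sketch that kicks off a run on a WebPageTest instance and pulls back the median first-view load time. It assumes the classic WebPageTest HTTP API (runtest.php / jsonResult.php), a hypothetical local host and API key, and that curl and jq are installed - the JSON field paths can differ between WebPageTest versions, so check them against your own install.

    #!/usr/bin/env bash
    # Minimal sketch: submit a WebPageTest run and report the median first-view
    # load time. Host, target URL and API key below are hypothetical placeholders.

    WPT_HOST="http://wpt.example.local"      # local WebPageTest instance (assumed)
    TARGET_URL="https://www.example.com/"    # page under test
    API_KEY="changeme"                       # only needed if the instance enforces keys

    # Submit the test and capture the test id from the JSON response
    TEST_ID=$(curl -s -G "${WPT_HOST}/runtest.php" \
      --data-urlencode "url=${TARGET_URL}" \
      --data-urlencode "f=json" \
      --data-urlencode "k=${API_KEY}" | jq -r '.data.testId')

    # Poll until the run is complete (statusCode 200)
    while true; do
      RESULT=$(curl -s "${WPT_HOST}/jsonResult.php?test=${TEST_ID}")
      [ "$(echo "$RESULT" | jq -r '.statusCode')" = "200" ] && break
      sleep 30
    done

    # Median first-view page load time, in milliseconds
    echo "$RESULT" | jq -r '.data.median.firstView.loadTime'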

Load test data profiles

  1. Production log file analysis
  2. Front end (pre-cache - Adobe Analytics) versus back end (post-cache - server log files)
  3. GETs, PUTs, POSTs etc. Not everything is in the log files
  4. You need to liaise with devs/architecture/business around expected traffic on new systems or peaks that are likely to happen in the near future due to known events
  5. Specific business cases can arise due to publicized events - ad campaigns/competitions
  6. Be aware of cache hit ratios and real data selections by users
  7. Calculate request ratios across apps, services and layers (see the log analysis sketch below)
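
To give a flavour of the log file analysis, here is a minimal bash sketch that pulls request method ratios, the most popular paths and the hourly traffic shape out of an access log. It assumes the standard Apache/Nginx combined log format (method in field 6, path in field 7) and a hypothetical log location - adjust both for your own systems.

    #!/usr/bin/env bash
    # Minimal sketch: request ratios and traffic shape from a combined-format
    # access log. Field numbers and the log path are assumptions - adjust to fit.

    LOG=/var/log/nginx/access.log   # hypothetical log location

    echo "== Requests per HTTP method =="
    awk '{gsub(/"/, "", $6); print $6}' "$LOG" | sort | uniq -c | sort -rn

    echo "== Top 20 request paths (candidates for the data profile) =="
    awk '{print $7}' "$LOG" | sort | uniq -c | sort -rn | head -20

    echo "== Requests per hour (find the peak you need to model) =="
    awk -F'[][]' '{print substr($2, 1, 14)}' "$LOG" | sort | uniq -c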

Team communication

  1. You need to be pro-active
  2. You need to liaise with devs, dev-ops, team leads and project managers
  3. You need to have access to book environments, lock services (mocking), monitor systems
  4. You need to specify any data requirements to interested parties
  5. You need to coordinate everyone across the various systems so your testing is not interrupted and you do not break environments or interrupt others
  6. You need access to ramp up databases/build environments to specifications (1,2,3 app servers etc.)
  7. You need to be more pro-active than point 1 above suggests. You need to coordinate everything you need - anything missing will ruin your testing
  8. Give people notice and chase up after. Double check anything critical
  9. Email all interested parties before/during/after test runs
  10. DO NOT run any performance tests without everything being signed off and ready. (I have seen junior perf testers bring down systems or point at the wrong environment and cause havoc)
  11. Wait for confirmation. Chase up directly if anything is not made clear, and ask again if you feel you need to - you should be coordinating everyone involved.
  12. Watch out for testing clashes. Is anyone else running a performance test that shares any of your resources? I am lucky just now: covering most projects myself makes this easier to keep an eye on. But in the past I have often found out halfway through that someone on the other side of the office was running tests without my knowledge. If testing is split across teams/offices you need a system in place to cater for this.

Reporting

  1. This is critical to get your conclusions across and action any findings
  2. Always provide a management summary that anyone/everyone can understand
  3. Provide details lower down. You must support any claims/findings with clear evidence.
  4. Keep as much analysis data and logging as possible until issues have been fixed
  5. Store key results data sets for future analysis
  6. Cover everything you report on - you may need to justify decisions
  7. Send emails about anything untoward or worthy of note. Even if they are not read they are good to have as a record
  8. Take a note of the application versions you are running against. Later on this documented information can be key - to prove, for example, that something worked in a particular earlier version (a small capture script is sketched after this list)
  9. Go to stand-ups when you can. If you are across projects, make sure you keep in touch with the teams if you can't make all the stand-ups
  10. It is a good idea to work with Jira, but performance testing doesn't easily fit the agile model so you may find a better way to work.
  11. One thing I do now is have my own weekly meeting on upcoming performance work across projects. This allows you to plan resources for the week, book environments, and think about any DB updates or data files you may need to obtain
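
As a small illustration of point 8, here is a bash sketch that records run metadata alongside the results before a test starts. The version endpoint and results folder are hypothetical placeholders - substitute whatever your applications and conventions actually expose.

    #!/usr/bin/env bash
    # Minimal sketch: capture who/when/what-version into the results folder so a
    # report can be tied back to exact builds later. Paths and URL are examples.

    RESULTS_DIR="results/$(date +%Y%m%d_%H%M)"
    mkdir -p "$RESULTS_DIR"

    {
      echo "Run started : $(date -u)"
      echo "Run by      : $(whoami) on $(hostname)"
      echo "App version : $(curl -s https://app.example.com/version || echo 'unavailable')"
      echo "Injector    : $(uname -a)"
    } > "$RESULTS_DIR/run-info.txt"

    cat "$RESULTS_DIR/run-info.txt"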

Results analysis

  1. Take care. Test tools can be at fault, even the amazing LoadRunner! (A quick independent check on raw results is sketched after this list.)
  2. If you have any corroborating monitoring (Dynatrace, built-in app metrics, post-analysis of log files), make full use of it
  3. Use the teams around you. If you do not have access to systems that you need, talk to others, try and get access or get reports emailed out to you
  4. Get analysis reports from DBAs when required
  5. Within LoadRunner watch out for issues caused by data. If you have monitoring plugged in, make full use of it - graph server resources and use the correlation and cross results functionality
  6. Make sure all your scripts are checking correctly for pass/failure
  7. Talk to the devs about expected results and return codes when designing scripts
  8. Make sure your cache hit ratios are correct. Slow results can stem from random data selection that just won't be seen in Prod.
  9. If you can, compare with current Prod systems
  10. If there are issues, try and pin them down to specific causes. See 'Finding Issues' for some examples
  11. If you are not experienced at this, involve other people from other teams
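
As a quick independent check on the tool's own summary (point 1), here is a bash sketch that computes the average and 95th percentile straight from a raw results CSV. It assumes a comma-separated file with a header row and the elapsed time in milliseconds in column 2 - adjust the column, delimiter and units to match your tool's output.

    #!/usr/bin/env bash
    # Minimal sketch: average and 95th percentile response times from raw results.
    # File name and column number are assumptions about your tool's CSV output.

    FILE=results.csv   # hypothetical raw results file
    COL=2              # column holding the response time in milliseconds

    tail -n +2 "$FILE" | cut -d',' -f"$COL" | sort -n > /tmp/rt_sorted.txt

    COUNT=$(wc -l < /tmp/rt_sorted.txt)
    P95_LINE=$(( (COUNT * 95 + 99) / 100 ))   # ceiling of 95% of the sample count

    echo "Samples : $COUNT"
    echo "Average : $(awk '{ sum += $1 } END { if (NR) printf "%.1f ms", sum/NR }' /tmp/rt_sorted.txt)"
    echo "95th pct: $(sed -n "${P95_LINE}p" /tmp/rt_sorted.txt) ms"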

Tests to run

  1. There are lots of different systems out there and any number of components can cause performance issues
  2. Your job is to catch as many issues as you can before go live.
  3. Always try and run a soak test, the longer the better. This weeds out any longer term issues
  4. Try and run high load soak tests. This is aimed at stressing the app and DB rather than the hardware
  5. Scalability needs testing. Even apps that are specifically designed to be linearly scalable are often not (ask me about Citrix farms!)
  6. And to look for true server capacity you need to creep up on it. Slowly increase the load and run at steady state before deciding whether that load can be sustained. Then up the load by 10% and repeat. It takes time, but you get proper answers that way (a minimal driver loop is sketched after this list)
  7. Double and quadruple the servers and check with decent-length tests - maybe half an hour, maybe 3 hours each, depending on the apps and the data model
  8. Watch out for test tools! You can hit their limits and it's not always obvious. In AWS, watch the number of injectors: CPU and memory may look fine but AWS may be limiting network bandwidth in the background. More injectors can fix this.
  9. Run tests to different layers if you can to isolate performance bottlenecks
  10. Watch out for caches - real data versus test data. With long-running tests it is very difficult to keep your cache hit ratios low enough, and that can skew your results - improving things for the servers!
  11. In the past I would always run rendezvous tests. This is not needed so much with stateless web sites, but bear it in mind.
  12. Low level as well as high level tests. A lot of emphasis is put on high load tests, mainly because we are often focussed on app and server stability. However, there is also a place for low level tests, to check application response times under more normal (mid-day) loads, when the caches may not be hit so much.
  13. Of course you also want to run several standard tests. These are particularly useful for regression testing: various 1- or 2-hour tests that can become benchmarks. Results can then be directly compared between application versions etc.
  14. If at all possible run the same tests several times, at different times of day. This allows for variations on the networks and servers - traffic, batch jobs etc. - that you may not be aware of
  15. For front end testing it's really good to run from home (with a local install of WebPageTest) so you can test over a real live network through a standard ISP.
  16. There is a whole other side to performance testing: development support and tuning. In these cases you work closely with the devs and follow what they need to perfect their apps - usually Java memory tuning. You still need to bear data profiles in mind, but tests may be built with very simple repetitive calls, perhaps one-line tests - data is still important for caching reasons. This can all be discussed with the dev and models can be designed together. It depends on their specific needs here rather than the wider (final) business use
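
For the capacity "creep" in point 6, here is a minimal bash driver loop using JMeter in non-GUI mode. It assumes a hypothetical test plan that reads its thread count from a JMeter property called threads (via ${__P(threads)}) and controls its own duration - the plan name, starting load and step size are illustrative only, and in reality you would review each step's results and the server monitoring before moving on.

    #!/usr/bin/env bash
    # Minimal sketch: step the load up in ~10% increments using JMeter non-GUI
    # mode. Plan name, property name, start level and step count are assumptions.

    PLAN=capacity_test.jmx          # hypothetical test plan
    USERS=50                        # starting virtual users
    STEP_PCT=10                     # increase per step, in percent

    for i in $(seq 1 8); do
      echo ">>> Step $i: running at $USERS users"
      jmeter -n -t "$PLAN" -Jthreads="$USERS" -l "results_step${i}_${USERS}u.jtl"

      # In practice: check this step's results and server monitoring before
      # deciding the load was sustained and the next step is safe to run.
      USERS=$(( USERS + USERS * STEP_PCT / 100 ))
    done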

General points

  1. If you think you've found an issue, double check it, try other tests around it, and get the dev-ops to look at it (or whoever you are working most closely with - basically a second set of technical eyes)
  2. Keep an eye on the logs and on all the boxes' resources - including the test tools
  3. Don't be afraid to ask questions. Yes, you are technical, but don't let the devs put you down just 'cos you don't know the inside of their app. In these technical relationships make it clear that you are on their side - you are there to give them confidence in their app. If they say it doesn't need performance testing, I often simply ask whether they will sign the app off themselves!
  4. If you do find issues, look in the logs and see if you can send a more detailed email. Devs like reading logs! So if you do too, that's a good start!
  5. Otherwise send as much detail as you can and if something can be reproduced, that is always a big advantage.
  6. A lot of performance testing is about relationships. The dev teams want to come to you and say 'perf test this for me this afternoon', but to do that you need all of the above in place and working smoothly. And to get those reports out you need to be coordinated with all the other teams. Really this is just a matter of working out processes that work for you, getting all the agreed business communications right, fitting in with everyone else's booking systems etc. This does come with time, so if you've just started, don't worry!
  7. Run benchmarks and try and design them so you have all the details, so that months down the line you can be sure you are comparing like-for-like. These days I add transactions into the test just for documenting settings in the summary report. This makes things much quicker to check across runs.
  8. Watch out for functional issues! The number of times I have uncovered functional issues is surprising. Often this is because we are the only ones testing with concurrent users. Or sometimes it's because of the sheer amount of data we push through the system - I may test with a million URLs and the problematic ones get highlighted, but the functional testers just can't cover that breadth. (I even had one app that didn't work at all with concurrent users - and I mean 2 - it hit a DB lock - it had just never been tested under those circumstances until it got to me)
  9. Try and avoid production, almost at all costs! I have seen some terrible consequences. Load testing can kill systems. You only have to make a mistake in a scenario setting (yes, it does happen, we are still carbon-based life forms running the IT world!) and you can bring down sites and systems. Even here, during legitimate monitored testing, we did break a front-facing CDN. It wasn't meant to break - it wasn't meant to behave like that - but still, if you can, avoid any business critical systems.

Mathematical modelling

    It turns out that performance testing is a lot about mathematical modelling. The main aim is to mimic user interaction with the applications, covering different types of user, different work flows (and specifically their ratios to each other), typical data selection and even the ratio of usage across different applications if they share any resources - watch out for DBs being shared.

  1. Talk to all the stakeholders
  2. Get walk throughs of work flows
  3. Talk to the business (or BA) about importance and ratios of work flows
  4. Only include the work flows you have to - bear in mind the cost/benefit of your work
  5. Work flows may need to be included because of sheer volume
  6. Some work flows may be very occasional but need to be modelled because they are business critical
  7. Keep work flow scripts simple and try and keep to 5 or 6 per project
  8. Let the functional testers provide test coverage. You are focussed on performance and server stability etc.
  9. Data modelling is critical. You must try and hit production cache ratios, and this isn't always obvious - some caches are hidden deep within applications or DBs. The best way to be sure is to analyse production log files for real data patterns (a small sketch of this follows the list).
  10. Sometimes you can replay logs - or at least pull out the data and use it directly.
  11. Other times you need to apply curves to data you have gleaned from DBs - there are some methods for doing this on this site
  12. If you are looking at new apps, have discussions with the devs and BAs and try to apply your typical users to the new app, as they will be expected to behave at go-live.
  13. All of this is critical because if your models are wrong, your answers will be wrong.
  14. When it comes to shared resources, watch out for different peak times across different apps.
  15. Apps can be run in isolation, particularly with the benchmark approach, but often you need to run several apps concurrently but with different peak models depending on what you want to look at
  16. Something else: every performance test should be designed to answer a particular question, and the answer should be useful enough that the results will actually be used. Otherwise there is no need to run that test.
  17. When the devs ask you to perf test something, dig a little deeper and find those questions so that you can design your tests accordingly.
  18. And these questions should then be addressed in your results report and in particular in the management summary
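
As a small sketch of the data modelling in points 9 and 10, the bash below builds a test data file that preserves production popularity, so that random selection in the load tool reproduces realistic cache hit ratios. It assumes combined-format access logs (method in field 6, path in field 7), a hypothetical log location and a tool that picks lines at random from a flat data file - the principle is simply to keep one line per production request, duplicates and all.

    #!/usr/bin/env bash
    # Minimal sketch: weighted URL data file from production logs. Popular paths
    # appear many times, rare ones rarely, mirroring real traffic and caching.

    LOG=/var/log/nginx/access.log   # hypothetical production log
    OUT=urls_weighted.dat

    # One line per production GET request, shuffled so the tool does not replay
    # them in time order
    awk '$6 == "\"GET" {print $7}' "$LOG" | shuf > "$OUT"

    # Quick sanity check on the implied data spread
    TOTAL=$(wc -l < "$OUT")
    DISTINCT=$(sort -u "$OUT" | wc -l)
    echo "Total requests: $TOTAL  Distinct paths: $DISTINCT"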

Maintenance of scripts and data

  1. As projects develop the scripts need to change with them
  2. This may be changes to endpoints or it could mean additional functionality
  3. This requires you to be on top of all the development work that can affect your remit
  4. You must keep up with the Jiras and make sure any changes are conveyed to you
  5. And benchmarks may need re-setting to take into account acceptable changes in performance - of course these requirement changes do need sign-off from the relevant stakeholders
  6. Watch out particularly for CI test packs. These run automatically and can easily be taken for granted. Over time they are likely to creep away from true form
  7. Also, data needs maintaining separately.
  8. Data creeps quite quickly - so quickly, in fact, that I have had to design the CI to combat it. But every month or so the core data files on CI projects do need looking at.
  9. Other projects typically need looking at more frequently (I have built design strategies into the CI that are not typically present in everyday projects)
  10. As data goes out of date, the number of errors in a test run increases. That is OK in itself: you can see the details and sign them off as known data issues. HOWEVER, this factor can easily drown out real errors, AND it clogs up the log files (in all systems), so you really need to keep these known errors to a minimum. The easiest results to analyse are those with known 100% good data, so any issues can clearly be seen (a minimal CI error-rate check is sketched after this list)
  11. When it comes to CD (down the line, with Bamboo) this data maintenance issue will be even more critical. Without any user intervention, someone must take on the responsibility of keeping on top of it
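
As a minimal example of keeping known data errors in check on CI (point 10), the bash below fails a Jenkins/Bamboo step if the error rate in a JMeter CSV results file creeps above a threshold. It assumes the default CSV output with a header row containing a "success" column - the file name and threshold are examples only.

    #!/usr/bin/env bash
    # Minimal sketch: fail the build if the error rate in a .jtl results file is
    # too high, so stale data gets fixed rather than drowning out real failures.

    JTL=results.jtl        # hypothetical results file
    MAX_ERROR_PCT=2        # example threshold

    # Find which column holds the success flag from the header row
    COL=$(head -1 "$JTL" | tr ',' '\n' | grep -n '^success$' | cut -d: -f1)

    read -r TOTAL ERRORS <<< "$(tail -n +2 "$JTL" | awk -F',' -v c="$COL" \
      '{ total++; if ($c != "true") errors++ } END { print total+0, errors+0 }')"

    ERROR_PCT=$(awk -v e="$ERRORS" -v t="$TOTAL" 'BEGIN { printf "%.2f", t ? 100*e/t : 0 }')
    echo "Samples: $TOTAL  Errors: $ERRORS  Error rate: ${ERROR_PCT}%"

    # Non-zero exit fails the CI step
    awk -v p="$ERROR_PCT" -v m="$MAX_ERROR_PCT" 'BEGIN { exit (p > m ? 1 : 0) }'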

CI setup - LoadRunner

CI setup - WebPageTest

WebPageTest local setup, configuration and update strategy

Load test tools and controllers and injectors

    AWS accounts and scripts

    LR project structures and working practices

Local LoadRunner test tool box for script development and small investigations (has license installed)

Projects - old and current and their documentation and data models and specific requirements

Version control and backup procedures

HP support account

Project documentation (and benchmarks) on Confluence

Performance test tab in the main Jenkins server

This web site! (for reference). My phone number!

(and please be careful with all this: look after the scripts and scenarios, and all the CI configurations. Watch out for WebPageTest - it can be delicate, but it is brilliant underneath. And keep the teams sweet, keep on top of everything, and it will all run fine for years to come)

 

 
