For me this is the most useful addition to the original solution; in fact, it is a standalone add-on and can be used with the original solution even if the other changes are not required. There is one caveat to that statement: the original solution outputs data to .jtl files, and this needs changing to .csv files if you want to use Jenkins as described here. See below.
For CI performance testing we need a pass/fail criterion that can be used to stop the build. Typically I would recommend something like the 95th percentile of particular transactions (as used here). This could be relaxed to the 90th percentile if you have a particularly variable test environment and you don't want the build to break unnecessarily. Alternatively, other metrics could be derived from the available data, perhaps using some of the specific results files described on the JM Results page.
If you look at the collated output from the 'Generate Summary Results' listener (which this solution must include), you can see that all the named requests from the test script are listed, with their response times in the second column:
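As a rough illustration of how such a gate could work, the sketch below reads a CSV in the shape just described (request label in the first column, response time in milliseconds in the second), computes a nearest-rank 95th percentile per label, and exits non-zero when any label breaches a threshold. The file path, the 2000 ms threshold, and the exact column layout are assumptions for illustration; adapt them to your own results files.

```python
# Hypothetical CI pass/fail gate: parse a collated results CSV where
# column 1 is the request label and column 2 is the response time (ms).
# THRESHOLD_MS is an assumed SLA for illustration only.
import csv
import math
import sys
from collections import defaultdict

THRESHOLD_MS = 2000  # assumed SLA; tune for your environment


def percentile(values, pct):
    """Nearest-rank percentile: smallest value with >= pct% of samples at or below it."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[rank - 1]


def gate(csv_path, pct=95):
    """Return True if any request label's pct-th percentile exceeds the threshold."""
    times = defaultdict(list)
    with open(csv_path, newline="") as fh:
        for row in csv.reader(fh):
            label, elapsed = row[0], float(row[1])
            times[label].append(elapsed)
    failed = False
    for label, samples in sorted(times.items()):
        p = percentile(samples, pct)
        breached = p > THRESHOLD_MS
        failed = failed or breached
        print(f"{label}: p{pct}={p:.0f} ms [{'FAIL' if breached else 'ok'}]")
    return failed


if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(1 if gate(sys.argv[1]) else 0)  # non-zero exit breaks the build
```

In a Jenkins job, a non-zero exit code from a build step like this is enough to mark the build as failed, so no extra plugin logic is strictly required for the gate itself.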