The scripts should allow the client (and the developers) to initially validate the findings, and then validate the fixes (or mitigations). Ideally these 'scripts' should be delivered as 'Unit Tests' and should cover a large number of exploit variations (for example, for SQLi/XSS vulnerabilities, run through the respective FuzzDB payloads).
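As a minimal sketch of what such a deliverable could look like (the payload list, filter function and file names here are hypothetical stand-ins; in a real engagement the payloads would be loaded from the actual FuzzDB files and the filter would be the application's own input handling):

```python
import unittest

# Representative SQLi payloads; in practice, load the full set from the
# FuzzDB attack-payload files instead of hard-coding a sample.
SQLI_PAYLOADS = [
    "' OR '1'='1",
    "1; DROP TABLE users--",
    '" OR ""="',
]

def is_input_rejected(value):
    """Hypothetical stand-in for the application's input validation.

    In a real test this would call the application's actual filter or
    endpoint; here it just flags common SQL metacharacters.
    """
    dangerous_tokens = ["'", '"', ";", "--"]
    return any(token in value for token in dangerous_tokens)

class SqliFindingTest(unittest.TestCase):
    """Encodes the finding as a test: it fails while the vulnerability
    exists and passes once the fix rejects every payload variation."""

    def test_all_payloads_are_rejected(self):
        for payload in SQLI_PAYLOADS:
            with self.subTest(payload=payload):
                self.assertTrue(is_input_rejected(payload))

    def test_benign_input_is_accepted(self):
        self.assertFalse(is_input_rejected("alice"))
```

The client runs this with something like `python -m unittest test_login_sqli.py` before and after the fix, which is exactly the validate-finding / validate-fix cycle described above.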
The only PDF(s) delivered should be the high-level analysis, with lots of visualizations and graphs (with red, orange and green lines :) ).
This is what I'm trying to do with the O2 Platform, and although most consulting companies (and tool vendors) are happy to stay in their (paper/PDF) world, the clients (and developers) to whom I show this in action really get excited about it.
The other key element is that we need to fit our findings' data into whatever systems the client has in place (namely bug tracking and reporting solutions). What the recipient of a security analysis really wants is to transform our output into their world. The first step is the integration with their bug tracking system (if they have one), and usually when the developers are involved, the conversation moves quickly to 'can we use those scripts/automation for more than security analysis?', to which the answer is, of course, YES (in fact we should START our security engagements using the developers' scripts, which should already map out the application's attack surface and behaviour).
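That first integration step can be as simple as mapping a finding onto whatever issue schema the client's tracker accepts. Here is a hedged sketch: the finding fields and the issue field names (`title`, `description`, `labels`, `severity`) are hypothetical, since every tracker's REST payload differs, but the mapping idea is the same:

```python
def finding_to_issue(finding):
    """Map a security finding (hypothetical schema) to a generic
    bug-tracker issue payload, ready to POST to the client's tracker."""
    return {
        "title": f"[Security] {finding['vuln_type']} in {finding['component']}",
        "description": (
            f"{finding['description']}\n\n"
            # Link the issue back to the repro/validation script,
            # so developers can replay the finding and verify the fix.
            f"Repro script: {finding['script_path']}"
        ),
        "labels": ["security", finding["vuln_type"].lower()],
        "severity": finding["severity"],
    }

issue = finding_to_issue({
    "vuln_type": "SQLi",
    "component": "/login",
    "description": "Login form concatenates user input into a SQL query.",
    "script_path": "tests/test_login_sqli.py",
    "severity": "high",
})
```

Once findings exist in this form, pushing them into the tracker is a single API call per issue, and the developers see security work inside the same workflow they already use for every other bug.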
Unfortunately we currently have a catch-22 situation: since the application security teams are not delivering their findings as scripts (i.e. using automation), the clients don't know that it is possible (so they don't ask for it), and since there is no (or only very weak) demand, the application security teams don't spend the time (and resources) to move into the 'Findings as Scripts' / 'No more 200-page PDFs' world.