Research studies and surveys of the legal sector have been a feature of business life for some time now - and any number can be expected in the run-up to, and beyond, the liberalisation of the legal services market. But can you trust the results of the surveys you read? Would your law firm be better off commissioning its own surveys?

Subjects range from merger prospects and finance to trainee satisfaction, Bribery Act preparedness, outsourcing intentions and diversity. There are salary surveys, green surveys, marketing and market sentiment surveys. They claim to show instruction trends and ‘place to work’ data, and most claim to provide ‘benchmarking’ data.

But what is a reliable sample from which to make a ‘claim’? Is SurveyMonkey playing a useful role in professional life?

Defining the pool

Controlling the survey sample is a key issue for any piece of research to address. One-off closed questions, which give no picture of the person answering, run the risk that even quite a high number of returns is unrepresentative - just as a weekday mid-morning vox pop on a local high street, or a daytime radio phone-in, excludes most people in work or education. An online survey embedded in a webpage that asks, ‘Are you valued in your job? Yes/No,’ will produce a talking point, but (probably) not reliable data.

As Gavin Ingham Brooke, chief executive of communications and research agency Spada, puts it: ‘It is fair to say that some so-called studies are superficial and/or spurious, and care needs to be taken to avoid making business decisions based upon partial or skewed results.’

So, some control over who is being asked is crucial to the process. Alex Wright, head of research at Winmark, whose regular projects include annual surveys of both general counsel and managing partners, explains: ‘It is important to ensure that any bias is minimised in who is interviewed.’ That reduces the need to ‘weight’ the results, a process that, given the lower numbers associated with a very specific survey pool, can increase the risks of an unreliable result.
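Weighting can be sketched in a few lines. In this hypothetical example (the segments, proportions and answers are invented for illustration, not drawn from any survey cited in this article), responses from an over-represented segment are scaled down to match assumed population shares - and the small cell shows how a large weight lets a handful of answers move the headline figure:

```python
# Illustrative sketch of post-stratification weighting (invented data).
# Suppose large firms are 30% of the real population of managing partners,
# but supply 60% of our respondents - their answers are down-weighted.

population_share = {"large": 0.30, "small": 0.70}   # assumed true proportions
sample = [("large", 1)] * 12 + [("large", 0)] * 6 + \
         [("small", 1)] * 4 + [("small", 0)] * 8    # (segment, answered_yes)

counts = {}
for segment, _ in sample:
    counts[segment] = counts.get(segment, 0) + 1

n = len(sample)
# Weight = population share / sample share; a small, under-represented cell
# gets a big weight, so a few atypical answers can swing the weighted result.
weights = {s: population_share[s] / (counts[s] / n) for s in counts}

raw_yes = sum(yes for _, yes in sample) / n
weighted_yes = sum(weights[s] * yes for s, yes in sample) / n
print(round(raw_yes, 3), round(weighted_yes, 3))  # prints: 0.533 0.433
```

Here the 12 small-firm respondents each carry a weight of 1.75, which is exactly the exposure the article describes: with a very specific survey pool and low numbers per cell, heavy weights can amplify noise rather than correct bias.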

The use of technology has produced gains and losses in the survey process. Among the pluses is the ability to control and monitor a wide survey pool. Technology has also reduced the time researchers spend requesting data and uploading the results for analysis. Once there is a reliable and representative list of email addresses, an invitation can direct recipients to a survey website which can: recognise who they are; allow, or disallow, people to whom the link has been forwarded; and identify who should be sent chasing emails.

The software allows for snap analysis of results even while the survey is ongoing. Researchers can also ask ‘cross-tabulation’ questions, finding out with little effort if there is a meaningful link between answers given in two separate questions.
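A cross-tabulation is simply a table of counts for each combination of answers to two questions. A minimal sketch, using invented responses (the questions and answer pairs here are hypothetical, chosen only to show the mechanics):

```python
# Illustrative cross-tabulation of answers to two survey questions.
from collections import Counter

# Hypothetical answer pairs: (Q1 "expect growth?", Q2 "hiring next year?")
responses = [
    ("yes", "yes"), ("yes", "yes"), ("yes", "no"),
    ("no", "no"), ("no", "no"), ("no", "yes"), ("yes", "yes"),
]

crosstab = Counter(responses)  # cell count for each (Q1, Q2) combination
for q1 in ("yes", "no"):
    hiring_yes = crosstab[(q1, "yes")]
    hiring_no = crosstab[(q1, "no")]
    print(f"Q1={q1}: hiring yes={hiring_yes}, hiring no={hiring_no}")
# prints:
# Q1=yes: hiring yes=3, hiring no=1
# Q1=no: hiring yes=1, hiring no=2
```

In this made-up table, respondents expecting growth mostly also expect to hire - the kind of link between two separate questions that survey software now surfaces with little effort.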

Face to face

Many legal sector surveys provide comment boxes to go with the questions. But for research projects generally, the trend away from using face-to-face interviews is viewed in part as problematic.

As one academic posted on the website socialsciencespace.com: ‘The development of telephone surveys in the 1980s, and the development of internet panel surveys in the 1990s has led to a decline in the abilities of market research companies to provide large-scale, high-quality, face-to-face surveys.’ This, they note, has a knock-on effect on organisations which seek to commission ‘high-quality data collection’. Therefore, ‘with less capacity comes higher prices’.

How a project resolves the tension between quantitative and qualitative data can determine its utility for the target audience. Consultant Maureen Broderick, author of The Art of Managing Professional Services, explains: ‘The most valuable research combines both quantitative metrics with qualitative feedback and narrative.’

The ‘tough part’, she notes, is the ability to synthesise the data, draw conclusions, and present a meaningful ‘big picture’ analysis of what the research means and what the implications are for the industry; otherwise ‘it’s just a bunch of statistics’. Broderick’s book was based on around 100 interviews that also provided quantitative data.

And while just 17 were law firms, the work has been well received and endorsed by the larger and international firms which are the book’s target audience.

Even where face-to-face interviews cannot practically cover the whole survey base, a few interviews to sense-check and interpret the data are increasingly seen as key components in surveys of professionals. In some cases they may even be the sole basis of a report. As Ingham Brooke argues: ‘There is nothing inherently wrong in releasing a white paper based upon, say, a dozen interviews – provided the interviewees are established authorities and the findings are not overly relied upon.’ Papers based on desk research, he adds, can also be incredibly valuable, ‘provided the researcher has the skills, sources and insight to synthesise authenticated knowledge into a fresh take on an issue’.

Forever bursting bubbles

Eduardo Reyes recalls working with a law firm in 2007 on a survey that predicted the problems of a debt bubble

Understandably, survey and research sponsors look to do ‘something different’ with the work that their sponsorship is making possible. That can take the researcher who is gathering and handling the data into new territory.

The ‘something different’ one international law firm wanted to do in co-operation with the magazine I then edited for corporate counsel was a survey of the various players in the European leveraged finance and private equity community.

As with so much in finance, it is the private slang and technical language of banking, rather than the basic principles, that makes the sector appear difficult. One needs to know what the ‘second lien and mezzanine markets’ are to ask questions about them - but one does not need to have known for very long.

For any piece of research, there is always concern about the size of the sample returned - how much is enough? This survey got 40-plus complete sets of answers from senior figures at banks, partners from sponsors (such as private equity bosses), and insolvency specialists from leading firms. That looks like a small number at first glance, but when we started to look at who they were from, and the market share and market spread they represented, it was clear we had a solid sample.

The banks which answered had been bookrunners for over $60bn (more than half) of the European leveraged buyout loans in 2006. The sponsors accounted for $30bn of invested funds, and insolvency practitioners and turnaround specialists were from leading firms in the field. One of the first returned was a handwritten, faxed survey form from a leading private equity figure.

There is a perception that ‘50 to 100’ is the ‘good’ number for credibility or reliability - and there are sponsors who have declined to publish as a result of failing to meet one of these thresholds.

But the answer-sets on this survey allowed projections to be made which were borne out by events. Most participants thought ‘current levels of activity’ were ‘unsustainable’. The bankers were more optimistic about the economic outlook than sponsors and turnaround specialists. And, tellingly, there was a dissonance between expectations for growth in debt multiples, and fears around default and economic downturn.

It was striking that the banks appeared more focused on the activity of competitor banks than on macro-economic threats.

Rereading the report four years on, it is striking how much was - quite accurately - ‘sweated’ from this survey sample. That wasn’t down to me, but to the two law firm partners who put together a set of questions that those asked to participate could readily observe were relevant. We also had a good email list for the senior people to be approached.

When that first faxed survey came straight back, it was clear that the right questions had been asked, and that other good data was likely to follow.

If he could see the point of the survey, even in a crowded survey market, others would too.

The questionnaire

The key to achieving both a good response and usable results is a set of questions which respondents can see the point of. In professional services and client surveys, that means a question-set that is intelligent, timely and specific.

It is common for a survey invitation to claim that valuable ‘benchmarking information’ will be shared with participants. There are two points that either validate, or undermine, this claim. The first is the inclusion in the survey of other firms or clients who have a similar profile to the person completing the questionnaire. Equally important, though, is a question structure in which the answers given ‘test’ the validity of one another.

When it works well, that structure can reveal a series of gaps. Typically, this may be a ‘say-do’ gap. A respondent may say ‘red tape’ damages their business, for example, but when it comes to explaining their losses, financial problems or competitors’ practices may be the real culprits. A company may say it has allotted senior responsibility for Bribery Act compliance, yet have failed to make the most basic of other preparations.

Some research also looks for gaps in perception between the different groups who interact. That may be internal - different answers to the same questions given by a managing partner and a finance director - or it may be different scores given by a firm’s members from those given by clients for performance, service and excellence.

For a survey sponsor, such gaps produce ready press headlines. They can also provide a professional firm with usable customer data on what they should do better. Large gaps in internal and external perception point to serious weaknesses.

As Ingham Brooke notes: ‘Insights derived from research can deliver significant value to firms and practitioners. Firms employ research to win new clients, deepen relationships with existing ones, enhance internal engagement and knowledge, and even shape public policy.’

In professional services, Wright notes: ‘The target audience is difficult and expensive to research, and they are very time poor.’ Most are sympathetic to the need for a survey sponsor to find a headline for the launch of the results.

But for the results to avoid a deservedly sceptical response, and to have utility and a shelf-life, the research needs to impress on basic but important points. The survey sample has to be known and relevant; the questions timely and intelligent; and any interpretation needs to be done with the help of expertise which the professional reading the final result can respect.

As Ingham Brooke concludes: ‘Researchers must be transparent as to methodology, any limitations of the research, and their funding sources.’

Mirror mirror: studied reflections on law and legal services

The LMS Financial Benchmarking Survey, May 2011

The 11th Financial Benchmarking Survey by the Law Society’s Law Management Section surveyed 200 firms across a range of financial and business data, concluding that consolidation was a key theme as practices emerged from a difficult period.

Standout finding

  • Law firms are slowly rebuilding profitability and beginning to hire again;
  • Practices are bearing down heavily on non-salary overheads to boost the bottom line.

Cutting through, winning big: business development best practice in hard times, May-June 2011

Produced by legal research company Jures and commissioned by LexisNexis, this was based on a survey of managing partners, senior partners and marketing partners at 101 mid-market private client and commercial law firms.

In-depth interviews supplemented research data.

Standout finding

  • Almost half of all respondents (48%) had changed their business development strategy in the last 12 months as a result of the LSA.

NetworkMP Market Survey 2010

Produced each year by research, development and networking company Winmark, the benchmarking survey solicits replies from its managing partners network and a longer list of non-members. There were 95 replies for the 2010 survey, 72% coming from the legal sector.

The dominant subject is economic outlook for the year ahead, and the response of those answering to market challenges over the past year.

Standout findings

  • 54% of managing partners expected ‘stagnation or worse’ for the economy over the next year;
  • In 2010 the proportion of firms involved in M&A discussions increased to 42%.

LSB Research Note August 2011

A wide-ranging piece of desk research weighing in at 70-plus pages, it focuses on econometric and regulatory data. The Research Note seeks to help the over-arching regulator to ‘understand current drivers and trends in the market... crucial to our role as an oversight regulator’.

Similar research for its general counsel network (the CLO Programme) pointed to differences between client and law firm perspectives on service and billing.

Standout finding

  • The note dealt in some detail with the possible impact of ABSs on diversity within the legal profession. Despite significant progress in encouraging more diverse entry to the profession, the ‘trickle-up’ effect that might have been expected has not yet occurred at the level necessary to produce greater diversity at senior levels and in more commercially attractive parts of the sector.

Legal Services Consumer Panel, Will-writing report, July 2011

In a mystery-shopping exercise, 101 wills were reviewed by a panel of experts, including solicitors. The experts judging the wills did not know whether they had been drafted by solicitors, will-writers or members of the public.

Standout finding

  • One in four wills was ‘failed’ by the panel, and more than one in three was scored as either ‘poor’ or ‘very poor’. Just as many failed wills were drawn up by solicitors as by will-writers.

Hitting the Green Wall, and beyond, June 2010

A joint project between communications agency Spada, law firm Taylor Wessing and the British Property Federation, the report was based on 850 responses from lawyers, developers, surveyors and bankers involved in the development industry.

Standout finding

  • Nearly 60% had used some form of ‘green agreement’ such as a ‘green lease’.

Research by the 360 Legal Group, March 2011

The consultancy group looked at preparedness for the full implementation of the Legal Services Act 2007 at 58 smaller firms.

Standout finding

  • 46% said they were ‘actively considering’ becoming an alternative business structure.

Crowe Clark Whitehill’s legal benchmarking survey, September 2011

The accountancy firm intends to make this an annual survey. The results form an online tool that allows clients and survey participants to benchmark financial and performance data by a range of criteria.

Standout finding

  • As firms increase their turnover beyond the £5m mark, their level of profitability and efficiency tends to plateau rather than increase in line with the turnover figure.

Now is the time to prepare, June 2011

This was an analysis of UK businesses’ readiness for the Bribery Act 2010 by Russell Jones & Walker. Findings were based on 78 responses from senior managers working in construction and property, media and entertainment, and financial services.

Standout finding

  • 55% of businesses either ‘suspect’ or are sure that they have lost out to a competitor because of that competitor’s excessive corporate hospitality. Only 3% believed this had ‘never’ happened.