
Australian Prescriber Vol. 23 No. 6 2000

E D I T O R I A L

Efficacy, effectiveness, efficiency

John Marley, Professor, Department of General Practice, University of

Adelaide, Adelaide

Index words: drug utilisation, cost-effectiveness, drug

evaluation.

(Aust Prescr 2000;23:114–5)

How is it that guidelines for treatment often seem unrelated

to the patient sitting in front of the doctor? Guidelines are

mostly based on evidence gathered from randomised controlled

trials. These trials are very good at assessing efficacy – that is,

can a treatment work? Despite this, trials are not without

substantial biases. Many people may be screened before a few

are chosen to be included in a study, yet the results of the study

will be applied to the very people who were excluded. The

population studied in trials tends to be young, male, white,

suffering from a single condition and using a single treatment.

Most patients, at least in general practice, do not fit this

description. They often have multiple illnesses, take multiple

medications and are either too young or too old to have been

included in clinical trials. Perhaps we should accept a proposal

to define efficacy in relation to medications as ‘the extent to

which a drug has the ability to bring about its intended effect

under ideal circumstances, such as in a randomised clinical

trial’.*

In this issue…

The new drugs reviewed in this issue have all been

assessed for safety and efficacy. Although a treatment

may be efficacious, John Marley points out that it may

not be effective or efficient.

Heart failure needs effective treatment, but there are

often difficulties in managing the condition. Henry

Krum suggests some solutions to these therapeutic

dilemmas. Peter Fletcher believes that beta blockers are

the solution for some patients, even though these drugs

were once contraindicated in heart failure.

While the cost-effectiveness of bisphosphonates may be

questioned, they do have a role in some patients with low

bone density, particularly postmenopausal women.

John Martin and Vivian Grill inform us how the drugs

work, while Peter Ebeling discusses their clinical use in

osteoporosis.

The most effective treatment may not be a drug. In his

article on panic disorder John Tiller tells us that cognitive

behaviour therapy helps many patients. One of these

patients is actor Garry McDonald who reveals how he

overcame his anxiety.


Efficacy is not the same as effectiveness.1 A treatment is

effective if it works in real life in non-ideal circumstances. In

real life, medications will be used in doses and frequencies

never studied and in patient groups never assessed in the trials.

Drugs will be used in combination with other medications that

have not been tested for interactions, and by people other

than the patient – the ‘over the garden fence’ syndrome.

Effectiveness cannot be measured in controlled trials, because

the act of inclusion into a study is a distortion of usual practice.

Effectiveness can be defined as ‘the extent to which a drug

achieves its intended effect in the usual clinical setting’.* It

can be evaluated through observational studies of real practice.

This allows practice to be assessed in qualitative as well as quantitative terms.2

Australia is well suited to conduct observational studies because

we have a high standard of relatively unrestricted practice and

good national databases, such as those held by the Health

Insurance Commission. These databases can be used to validate effectiveness studies that researchers conduct on their own separate databases. In America there are very large patient databases

held by the Health Maintenance Organisations. Their size is

impressive, but size is not everything. The data may have been

collected primarily for billing and they may be incomplete.

Clinical practice is often governed by protocols, and

medications are limited to those supplied by the current

preferred providers. The reimbursement mechanism for doctors

may mean that they code conditions at the highest severity

level. Patients belonging to one of these organisations may not

represent the American population as a whole. In Britain, the

General Practice Research Database, compiled from practice

electronic records, is very useful, especially for studies in

pharmacoepidemiology. The British enjoy relatively

unrestricted clinical practice, but they do not have readily

usable national datasets against which to check the validity

of their database studies.

It is an irony that drugs are licensed for use almost exclusively

on the results of controlled trials, yet they are withdrawn from

use because of observational data that would not be acceptable

to licensing authorities. Biases are present in observational

studies, just as they are in trials, but they can be defined and

often controlled for, giving these studies much greater value than they are currently accorded.

*From a suggested dictionary of pharmacoepidemiology by

C. Ineke Neutel, University of Ottawa Institute on Health of

the Elderly, Research Department, SCO Health Services.

43 Bruyere Street, Ottawa CANADA K1N 5C8.


Efficiency depends on whether a drug is worth its cost to

individuals or society. The most efficacious treatment, based

on the best evidence, may not be the most cost-effective

option. It may not be acceptable to patients. In every country,

rationing of health care is a reality. There is no country,

however wealthy, that can afford to deliver all the health care

possible to the whole of its population at all times. Rationing

may be implicit or explicit, but it will happen. Good

effectiveness and efficiency studies will make this rationing

more informed.

Good practical guidelines, such as the Therapeutic Guidelines

series, are clearly very important and extremely useful. They

could be made even more relevant to the patient in front of the

doctor, by being less dependent on efficacy studies. We should

make more use of effectiveness and efficiency studies and

abandon the censorship of the evidence drawn from them.

R E F E R E N C E S

1. Haynes B. Can it work? Does it work? Is it worth it? Br Med J 1999;319:652-3.
2. Greenhalgh T. Is my practice evidence-based? Br Med J 1996;313:957-8.


Letters

Letters, which may not necessarily be published in full, should be restricted to not more than 250 words. When relevant, comment on the letter is sought from the author.

Due to production schedules, it is normally not possible to publish letters received in response to material appearing in a particular issue earlier than the second or third

subsequent issue.

Prescribing by numbers

Editor, – It was interesting to see an article on the number

needed to treat (NNT) (Aust Prescr 2000;23:38). NNT is

better than looking at relative risk reductions but NNT still

does not always give you a feel for the relevance of an

intervention.

I believe clinical decision-making needs to consider two

numbers. These are the paired absolute incidences.

X = event rate, control (the outcome with placebo, or the outcome if you do nothing)
Y = event rate, active (the outcome with treatment)

Consider a room full of 100 people with a clinical problem.

Put it to them, ‘Do nothing and the event will happen to X of

you, and if all of you take the pill it will happen to Y of you.’

Using the Helsinki Heart study as quoted in the article, how

would 100 men respond if told ‘Take gemfibrozil for five years and 2.7 of you will have an event; do nothing and 4.1 of you will have an event’? I suspect many would say why

bother with treatment, but some would say OK.
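As an aside (not part of the original letter), the following minimal Python sketch spells out this arithmetic, assuming the figures quoted above are five-year event rates per 100 men, and shows how the paired absolute incidences relate to the relative risk reduction and the number needed to treat.

# Paired absolute incidences from the Helsinki Heart Study as quoted above
# (assumed: five-year event rates per 100 men).
x = 4.1 / 100   # event rate with no treatment (control)
y = 2.7 / 100   # event rate with gemfibrozil (active)

arr = x - y     # absolute risk reduction: 0.014, i.e. 1.4 per 100 men
rrr = arr / x   # relative risk reduction: about 0.34, i.e. 34%
nnt = 1 / arr   # number needed to treat: about 71 men for five years

print(f"ARR = {arr:.3f}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")

Quoted as a 34% relative risk reduction the treatment sounds compelling; quoted as ‘treat about 71 men for five years to prevent one event’, or as the X and Y figures above, it invites exactly the discussion the letter recommends.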

Clinical decision-making needs to be made in the context of

real people. Other comorbidity, patient attitude, patient

expectations, the psychological burden of a disease label,

adverse effects, secondary costs (for example, more visits to

the doctor) all need consideration. I believe that by looking

at the two numbers (X and Y) I can get a better feel for the

relevance of an intervention, and also inform my patients

about ‘doing something’ versus ‘doing nothing’.

I believe the treatment of risk and risk factors is greatly

overrated, and that many are treated for risk without a

genuine consideration of how much of a difference it could

make for the individual. As the surgeons learn to withhold

the knife, I believe we should learn to hold back the drug

treatment of risk factors, not because there is no evidence, but

because in the bigger picture it is irrelevant to the patient –

this will be facilitated by looking at the X and Y numbers.

Paul Neeskens

General Practitioner

Hervey Bay, Qld

Medicines and the media

Editor, – The Australian Prescriber editorial (Aust Prescr

2000;23:70–1) regarding reporting of medicines in the media

is timely. On 13 April 2000, an article in the Adelaide

‘Advertiser’ included the headline ‘Accepted safe levels of

cholesterol “still too high”’ and pictured a young woman

having a cholesterol test. The commentary continued,

‘Worldwide evidence proved “normal” cholesterol levels in

healthy men and women were too high, an international

authority on heart disease said in Adelaide yesterday’. The

article went on to talk about ‘...a new ultra-low dose

cholesterol-reducing drug called cerivastatin, ...recently

approved for use ’

Assuming a new study had been released assessing health

outcomes associated with cerivastatin, we contacted the

reporter. He could not provide any information to support

the story, but suggested we contact the Adelaide marketing

company publicising the visit of the overseas specialist. The

marketing company supplied their media release, but could

not provide a reference. They reported the media release was

redrafted from one produced by a Sydney company. The

Sydney marketing company also could not provide a

reference. They said their media release was based on

information supplied by Bayer, but they had returned all

material to Bayer.

We rang Bayer on five occasions. The product manager was

never available to speak to us, nor has he returned our calls.

The Adelaide marketing company, however, was more

sympathetic. They rang us back to say the West of Scotland

Coronary Prevention Study, a 1995 study involving

pravastatin, was the basis for the story. Was the story ‘news’

or advertising? How can consumers tell the difference?

Libby Roughead and

Andrew Gilbert

School of Pharmacy and Medical Sciences

University of South Australia

Adelaide

