
Scientific Research 101: Results!

Category: Conduct of Science, Statistical Reasoning
Posted on: March 24, 2010 2:33 AM, by DrugMonkey

So you've just completed your last assays on physioprofitin signaling in the Namnezian complex. Lo and behold, it is qaz-mediated, just like you suspected, and the beccans are off the freaking chart. Woot! PiT/PlS ratios are within relevant physiological ranges and still this work of art, your labor of love, came through with the experimental goods.

With a hope and a prayer you run your stats....and YES! p < 0.01!!!!!!!!!!!!!!!

What is the correct way to report your big Result?



Comments

1

BTW, that's physioprofitin(TM) and you owe me royalties.

Posted by: Anonymoustache | March 24, 2010 9:18 AM


3

on my side of the research pond, we'd use association language for p-values as it's awful hard to infer causality (which most of the language above requires one to do).

Posted by: JD | March 24, 2010 10:21 AM

4

I can't believe how unpopular "revealed" is! I always say revealed when talking about stats. "Indicate" and "demonstrate" are for the Discussion.

Posted by: Dr Becca | March 24, 2010 10:32 AM

5

ooooooh, this is a pet peeve of mine. DM, you and everyone else who picked anything different than 'other' needs to go back and take stats 101. Review in particular the theory behind p values and the null hypothesis.

I chose 'other', and suggested "...is consistent with..."

Posted by: Joe | March 24, 2010 10:38 AM

6

"Other" responses so far...

Suggested

...is consistent with a...

suggested

Experiment confirmed quaz-mediated... (p<0.01)

showed an association in the

messing with qaz yielded a statistically significant

upregulation of blah blah

supported

made publishable our crappy work on

Active voice: qaz-mediation significantly upregulated physioprofitin ... (p < 0.01)

Posted by: DrugMonkey | March 24, 2010 11:29 AM

7

Well, there are two issues in play: 1) the need to understand what statistics really tell us (which is what Joe is talking about); and 2) the need to communicate science in a concise and pragmatic style that isn't going to frustrate the reader, who just wants to know what the fuck happened and what the authors think it means.

To me, I assume that #1 is understood as a professional disclaimer to all published work (except when the applied statistical analyses are clearly inappropriate). If I think their appeal to stats is bullshit, then I can simply draw a different conclusion from theirs. But to be honest, papers that are littered with "mights", "possiblies", "ifs", "buts" and "consistent withs" relating only to the use of statistics, rather than for more pertinent reasons regarding experimental design, equipment limitations &c., tend to make for tough reading. It also isn't always clear what the writers are being so cautious and indecisive about (are they holding back because they know their experiments were a bit dodgy? Because a competitor is about to bring out a contradictory paper?)

That said, the trend for differentiating between "ultra-mega-eleventy significance!" vs "mildly significant" is a more tangible issue, imho, because it involves an active and explicitly inappropriate interpretation of statistics rather than a passive and stylistically pragmatic one, if that makes any sense.

Posted by: DSKS | March 24, 2010 11:53 AM

8

Hmmmmm... Interesting, DSKS. My reaction is the opposite. When authors imply that a p value less than 0.05 represents either proof or causality, then I assume the authors are either statistically ignorant or hoping salesmanship will substitute for better experiments (or both: ignorant liars). However, when the wording is cautious and consistent with what the measures actually demonstrate, I tend to trust the authors more.

I guess it depends on how you read papers. If you read papers wanting to be told the take home message in the simplest possible terms, then 'This does that' is great. But if you read papers to find out what the authors actually observed, then the literary obfuscation is annoying.

In the end I admit I can't come down firmly on one side or the other, though. Because when I'm on the toilet reading about some asteroid in Science, I want simply to be told whether it's bigger than Manhattan or not and whether it's iron or ice or dust or whatever and whether it's going to crash into the Earth any time soon. But I suppose actual astrophysicists want a bit more detail and less hype.

Which is probably why the glamourmagz use glamourlanguage, and the niche journals are full of 'consistent with' and 'correlated with'. One is toilet reading, the other is archival.

Posted by: Joe | March 24, 2010 12:24 PM

9

I intensely hate most of these choices. "the statistical analysis is consistent with..." is the most accurate.

The most egregious is "proved", which is flat out icky.

I don't care for most of the other choices, though I've seen them all and they may be awkward or inaccurate but they don't OMG Totally ruin the paperz! IMO.

As a grammatical aside, why are "confirmed", "demonstrated" and "established" encumbered with "an effect of"? They seem to read equally well without the extra words.


I'm not sure the root cause of most of these things is really a poor understanding of stats. Rather, I think it's as much a function of the intrinsic awkwardness resulting from the lack of human agents in scientific writing. I think the phrase "Based on the statistical analysis (p less than 0.05), we conclude that qaz-mediated..." is perfectly valid, but because scientific writing tends to avoid putting any people into the situation, we pretend that the stats do something. The conclusions are just "supported"... there have been times I've literally lol'ed thinking: "blah blah supports whose conclusion that...? Not MINE!!!!".

"BTW, that's physioprofitin(TM) and you owe me royalties."
What are you, Craig Venter? Going around trying to claim intellectual property on perfectly innocent biological molecules! Well, ok, chances are physioprofitin isn't innocent (I'm guessing its upregulation in the Namnezian complex leads to Tourette's or something), but you know what I mean.

Also, if the beccans are off the freakin chart, one would think that should be noted in the results.

Posted by: becca | March 24, 2010 1:22 PM

10

Well I guess the polling explains it.

I'm looking at a basket of FAIL on this one. Joe is on the right track and is *almost* there. Kudos. DSKS has, I think, hands on a key that Joe is missing...

Dr Becca....oh boy. Like a moth to the brightest flame. In a word, no.

Posted by: DrugMonkey | March 24, 2010 1:29 PM

11

I'm with the "if you need stats to prove it, it ain't biologically meaningful..." crowd, but I'm a cranky old asshole. And I have never done a t-test for any of my results, ever. For a lot of cellular physiology stuff, the sample sizes are not so large as to preclude just showing all of them.

Besides, I'd much rather have the support of further experiments to test the point.

But with Joe's point, I was taught to always indicate to the reader where we as authors came down strongly on an interpretation, and where we were kinda throwing out a plausible explanation. In the first case we stuck with simple declarative statements, whereas in the second we used some qualifier (perhaps, possibly, etc.)

Posted by: Nat | March 24, 2010 2:24 PM

12

More "Other"...

demonstrated, for the first [muppethugging] time in [muppethugging]history,

is not inconsistent with

would prefer not to word the sentence that way in the 1st place.

provided good evidence for

did not rule out

Posted by: DrugMonkey | March 24, 2010 2:37 PM

13

Next hint:


There is only one thing that has the right to claim that it can "show", "reveal", "establish", etc an effect.

What is that one thing?

Posted by: DrugMonkey | March 24, 2010 2:39 PM

14

God.

Posted by: Neuro-conservative | March 24, 2010 3:48 PM

15

@DM in response to #13:

Well, it sure the hell isn't any 'statistical analysis' directed solely at computing a p value. To say otherwise is like flipping a coin 3 times, never getting tails, and therefore concluding that the coin has only one side, or perhaps a head on both sides (assuming someone has previously published a paper containing the sentence: "Previous studies failed to find evidence for more than one side on a coin. Here, using three-dimensional reconstruction coupled with platonic solid analysis, we present evidence that coins resemble extremely squat cylinders and therefore have in fact two major sides.")

Posted by: Joe | March 24, 2010 3:57 PM
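A minimal Python sketch of the coin-flip point above (the flip counts are illustrative; none of the numbers come from the thread):

# Under the null hypothesis of a fair, two-sided coin, how surprising is
# "heads on every flip"?
for n_flips in (3, 5, 10):
    p_all_heads = 0.5 ** n_flips  # P(all heads | fair coin), one-sided
    print(f"{n_flips} flips, all heads: p = {p_all_heads:.5f}")

# 3 flips gives p = 0.125 (not even nominally "significant"); 5 flips gives
# p = 0.03125, which clears a p < 0.05 bar, yet the "one-sided coin"
# conclusion is still absurd. The p-value only measures surprise under the
# null; it never tells you which explanation of that surprise is correct.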

16

Heart attacks are pretty fucking biologically meaningful, particularly when you are the biological entity in question.

And you still need stats to study biologically meaningful phenomena in the real world.

There is only one thing that has the right to claim that it can "show", "reveal", "establish", etc an effect.

DM. Maybe it's too early in the morning but I'm torn between mathematical proof and the Scottish Presbyterian Church as an answer. But the sentence is already FUBAR because it starts with "statistical analysis" as the active player when it's really a tool to aggregate your DATA.

Posted by: antipodean | March 24, 2010 4:48 PM

17

Becca,

---"What are you, Craig Venter? Going around trying to claim intellectual property on perfectly innocent biological molecules!"---

C'mon now---I claimed IP on the word itself, as I coined it. Anyone who uses it owes me cash...sweet sweet cash...DM's desperately late attempt at citation via hyperlink is futile. My lawyers are on the case already.

Of course, I should also add that if someone also named a protein 'physioprofitin', and it turns out to be fit for profitable commercial use, then they would also automatically hand over to me the rights to the protein and all associated IP---because I had also functionally defined physioprofitin as something that would "make heaps and heaps of cash when expressed".

And oh....Muhuhahahahahaha.....eeeexcellent....

Back to the main issue....DM @13...Proof or Evidence?

Posted by: Anonymoustache | March 24, 2010 5:32 PM

18

blarg.
Just noticed that my less than sign borked my comment.
That bit should have read something like:

I'm not sure the root cause of most of these things is really a poor understanding of stats. Rather, I think it's as much a function of the intrinsic awkwardness resulting from the lack of human agents in scientific writing.
I think the phrase: "Based on the statistical analysis (p less than 0.05), we conclude that qaz-mediated..." is perfectly valid. But because scientific writing tends to avoid putting any people into the situation, we pretend that the stats do something. The stats didn't establish qaz-mediated ..., the experimenters established qaz-mediated... , using stats as one tool.

Posted by: becca | March 24, 2010 5:39 PM

19

"I had also functionally defined physioprofitin as something that would "make heaps and heaps of cash when expressed". "
like I said... Venter-like.
:-P

On a only vaguely related note, does anyone know if the Sonic Hedgehog protein namers got sued?

Posted by: becca | March 24, 2010 5:42 PM

20

it's really a tool to aggregate your DATA.

DM @13...Proof or Evidence?

we pretend that the stats do something. The stats didn't establish qaz-mediated

Exactly. It is only the experimental result (aka, the data) that can 'reveal' an effect.

The notion that statistical analysis 'reveals' something sends me around the bend in a foaming froth.

And becca's excuse for this practice
But because scientific writing tends to avoid putting any people into the situation
is wrong. The answer lies in ....the actual answer.

All the statistical analysis does is confirm the likely reliability of the experimental result. That, or similar, is the way it should be phrased. There is no reason to insert a fake statistical person into the description!

Now, anyone want to take a shot at why this is a problem for the way scientists operate (i.e., think about their experimental results), not merely a semantic preference?

Posted by: DrugMonkey | March 24, 2010 5:54 PM

21

i can't stand the thought of starting a sentence with "the statistical analysis..." - it sounds to me like an appeal to a false authority. (which is an arguable point.) say what the data tell us, cite the stats to show that you are not completely full of shit.

Qaz is associated/correlated/whatever with upregulation of physioprofitin in the Namnezian complex (F values, p values).

psst, reviewer #3 suggests further experiments to demonstrate a causative effect.

Posted by: leigh | March 24, 2010 6:24 PM

22
It is only the experimental result (aka, the data) that can 'reveal' an effect.

The 'data' don't do that any more than a p value does. An 'effect' is a conclusion, an inference, and it is 'revealed' only by some mysterious neurobiological mechanism in the author's mind.

All the statistical analysis does is confirm the likely reliability of the experimental result.

'likely reliability'? No. It says nothing about reliability. It just says the probability of getting the same numerical result from a random sampling of the same set of possible numbers (population), assuming the set of numbers actually fits the characteristics assumed by the particular statistical procedure that is being employed. If every researcher who misused a t-test was forced to eat a statistics book, we would have a surfeit of statistics books in the world and a lot of researchers with paper-bloated bellies. And if every researcher who published a paper misusing t-tests was shot in the head, the NIH grant application success rate would shoot to about 90%, because the number of researchers in the world would be cut dramatically.

In any case, as mentioned above, statistics can't do anything. In particular, it can't conclude anything or infer anything, or even suggest anything. Statistics is only a tool for summarizing data, and statistical tests are merely a set of calculations for determining the overlap between hypothetical sets of numbers. People conclude things, make inferences, and suggestions based on the observations they make, often using statistics as a tool for clarifying and summarizing those observations.

There's no problem with statistics, DM, or the use of them.

The problem is that scientists confuse their observations with their inferences.

Posted by: Joe | March 24, 2010 6:39 PM
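A minimal Python sketch of that last paragraph (the group sizes and number of experiments are assumed numbers): when both groups are drawn from the same population, a correctly applied t-test calls the difference "significant" at p < 0.05 about 5% of the time, and that calibration is all the p-value supplies.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments, n_per_group = 10_000, 8

false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, size=n_per_group)
    b = rng.normal(0.0, 1.0, size=n_per_group)  # same population: null is true
    _, p = ttest_ind(a, b)
    false_positives += (p < 0.05)

# Roughly 5% of these no-effect experiments come out "significant".
print(f"false positive rate at p < 0.05: {false_positives / n_experiments:.3f}")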

23

OK, I think I accidentally did some html-fu there...let me rephrase the point of cutoff and try this one more time....DM, please delete the previous two...
___________

DM@20---"Now, anyone want to take a shot at why this is a problem for the way scientists operate (i.e., think about their experimental results), not merely a semantic preference?"---

Easy peasy lemon squeezy, methinks.
Bcos many scientists tend to, when seeing a P value of <0.01, believe that their observation is now the invariant truth, maybe? They not only tend to forget that it is a likelihood of reliability, as you phrased it, (or reproducibility?) but also tend to forget that their observations were limited to the boundaries of a gajillion different experimental conditions and variables etc, and the P value only tells them the reliability under those very conditions..... But, "I see it. The P value is <0.01. This must be absolute truth. I will now base subsequent and consequent research on this world view"......

Am I close?

Posted by: Anonymoustache | March 24, 2010 6:59 PM

24

I give up.
Pretty much what Joe said.....

Posted by: Anonymoustache | March 24, 2010 7:02 PM

25

It just says the probability of getting the same numerical result from a random sampling of the same set of possible numbers (population), assuming the set of numbers actually fits the characteristics assumed by the particular statistical procedure that is being employed.

I'm not seeing where my "reliability" is too much of a jump from that but sure, your formulation gets at the essential point. kinda unwieldy for a results section though.

An 'effect' is a conclusion, an inference,

fair enough. What's your preferred way to state the result then?

AM, I think the bigger problem is in undervaluing the data point that fails to garner the all-important p-value certification. A category of Type II error, sort of. I qualify this because it has more to do with the deployment of statistical procedures than being a feature of the approach itself. A sort of meta-design type of miss.

In behavioral pharm, as one example, we're frequently keen on inferring a "dose-dependent" effect of a drug. Some canonical types insist that you must have statistically significant differences between two drug doses, not just a difference from vehicle in both cases, even if you've observed results which order in an expected line. I don't see where anyone with a brain looks at that middle dose that differs from the vehicle, but not from the numerically lined-up dose, and says "well, nope, must not be a dose-dependent effect".

My concern is that the Church of the Holy Asterisk pushes people away from really understanding that the data are the driver.

It also leads (and this is especially bad for people who mistakenly think that anything smaller than 0.0499999 is meaningfully more impressive a result) to....animal use issues.

Perhaps this should be our next exercise: Is it justified to use 15% or 25% more animals simply because you think p less than 0.01 is so much more impressive than p less than 0.05?

Posted by: DrugMonkey | March 24, 2010 7:58 PM
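On that closing question, a back-of-the-envelope Python sketch (the effect size d = 0.8 and 80% power are assumed numbers, and the two-sample normal approximation is rough):

from scipy.stats import norm

def n_per_group(alpha, power=0.8, d=0.8):
    # standard two-sample approximation: n ~= 2 * ((z_{1-a/2} + z_power) / d)^2
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / d) ** 2

for alpha in (0.05, 0.01):
    print(f"alpha = {alpha}: ~{n_per_group(alpha):.0f} animals per group")

# ~25 per group at alpha = 0.05 versus ~36 at alpha = 0.01 for this scenario:
# chasing the extra asterisk costs closer to 50% more animals than 15-25%.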

26
All the statistical analysis does is confirm the likely reliability of the experimental result.

Dude, you are using "reliability" in a very idiosyncratic way. What the statistical analysis tells you is the likelihood--given more or fewer assumptions, depending on the nature of the analysis--that your conclusion of a difference among groups sampled from populations is due to chance and not due to a difference among the underlying populations. Why are you making this more complicated than it needs to be?

Posted by: Comrade PhysioProf | March 24, 2010 8:01 PM

27

As I already explained in response to nearly the same comment from Joe (great minds think alike!), CPP, sure you are right. But tell me your preferred way to say that in a Results statement...

Posted by: DrugMonkey | March 24, 2010 8:11 PM

28

The poll confirms what's well known - biomedical types are statistically illiterate. Which isn't surprising considering that in most universities one can get a B.S./Ph.D./M.D. without ever taking statistics. The thing is, the p = 0.01 only means that, given a bunch of assumptions, the chance that the observed difference is completely random is 1%. The devil is almost always in the assumptions part (the "model"). And it does not say that there is a 99% chance that the difference is non-random.

I interpreted the sentence as a statement about statistical treatment of the observed effects and entered "applied to" in the Other.

Posted by: DK | March 24, 2010 8:14 PM
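A minimal worked example of DK's last sentence (the prior, power, and alpha below are invented for illustration): the chance that a "significant" result reflects a real difference depends on how often the hypotheses being tested are true, not just on the p-value.

alpha = 0.01       # false-positive rate when the null is true
power = 0.8        # chance of detecting an effect when it is real
prior_true = 0.10  # fraction of tested hypotheses that are actually true

p_significant = prior_true * power + (1 - prior_true) * alpha
p_real_given_sig = prior_true * power / p_significant  # Bayes' rule

# Comes out to about 0.90, not 0.99 -- and it falls fast as prior_true shrinks.
print(f"P(real effect | p < {alpha}) = {p_real_given_sig:.2f}")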

29

Oh, you are askin' for the precision of language smackdown fun games time, aren't ya?
First let's get the word choice issues out of the way:
*"we would have a surfeit of statistics books in the world and a lot of researchers with paper-bloated bellies. "
"surfeit"- I do not think that word means what you think it means. Antonyms more appropriate in that context: "dearth".

*"All the statistical analysis does is confirm the likely reliability of the experimental result."
"reliability" - I do not think that words what you think it means. Closely related synonyms that may be more appropriate in that context: reproducibility.
Reliability: reproducibility :: accuracy: precision.
P values address only precision/replicability (and that imperfectly, for the reason others have noted - you have to assume the conditions are the same).

*Strictly speaking, from a grammar-fascist perspective, data don't reveal anything.


Now, as far as the philosophical point, I maintain that your objection to this is on par with objecting when people say things like "poison dart frogs evolved colorful markings to ward off predators".
Now, of course poison dart frogs did no such thing, they didn't go about saying "hmm, too many predators here, let us go forth and order some colorful markings from the Evolution R Us catalog"
And we* all know this!
(*we being, for this purpose, all biologists worth their salt and any well-educated layman to boot)

However, I can see the argument that it does sometimes get cumbersome to say "colorful markings arose randomly in poison dart frog populations, and those froggies with colorful markings tended to get consumed less often by predators" (or, conversely "as colorful markings arose in poison dart frog populations, predators that responded by NOT EATING said froggies outcompeted those that ate pretty poison froggies")
DM, your argument about precise statistical phrasing is on par with the view:
"If we say things like "X evolved Y in order to...", of COURSE we will all have ERRONEOUSLY purposeful views of evolution as a force that requires intelligent design or intent, and we will all therefore believe in god or the like and science will be unmade!!!1111 DOOM! DOOOOOOOOOM!!!!!1111eleventy"

Look, there are many words/phrases/verbal habits that we all use that, if taken literally, are nonsensical. But many times they still represent, in context, something perfectly reasonable.

Saying "the statistical analysis revealed an effect of qaz-mediated..." MEANS "judging by the statistical analysis, the odds of this seemingly qaz-mediated.... arising by some chance non-qaz-mediated mechanism is less than 1/20" [for p less than 0.05, of course]. If you don't read the former and mentally substitute the later, YOU are being sloppy (but not the writers of the paper).
Now, if you have a psychological study showing that people reading the type of statement you object to actually leads to incorrect conclusions (or, for that matter, that purpose-sounding phrases about evolution lead to intelligent design support) I'll A) apologize profusely and B) eat my hat. If, however, you are just ASSUMING that the phrasing you find so objectionable leads to sloppy science... well then. You should perhaps eat a linguistics text (Or read a poison dart frog)

Posted by: becca | March 24, 2010 8:23 PM

30
All the statistical analysis does is confirm the likely reliability of the experimental result.
This isn't just misleading, it's wrong. Reliability can only be confirmed by multiple studies with different samples. A single study says nothing about reliability.

To your poll question, tell me what the sample size and estimated effect size are. Your effect size could be tiny and meaningless, but you can get p < 0.01 if n is huge.

Posted by: lylebot | March 24, 2010 8:48 PM

31

p less than 0.01 if n is huge. (to finish that thought)

Posted by: lylebot | March 24, 2010 8:50 PM
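A minimal Python sketch of lylebot's point (the 0.01 SD effect and the million-per-group n are made-up numbers): with a big enough sample, a negligible difference sails past p < 0.01 while the effect size stays trivial.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n = 1_000_000
a = rng.normal(0.00, 1.0, size=n)
b = rng.normal(0.01, 1.0, size=n)  # true difference: 0.01 standard deviations

t, p = ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

# p is typically many orders of magnitude below 0.01 here, while d stays ~0.01:
# statistically "significant", biologically uninterpretable without effect size.
print(f"p = {p:.2e}, Cohen's d = {cohens_d:.3f}")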

32

I wonder what becca would think of my pet peeve of people misusing the word "significant" instead of "important?" You know, what with her mental substituting and all...

Posted by: Isis the Scientist | March 24, 2010 10:33 PM

33

DM: I stand by my original suggestion: "...is consistent with..."

Comrade: 'Reliability' to me implies more than probability. But I agree that it's open to interpretation, and accept that my interpretation might be uncommon. In my defense, though, I'd offer that 'reliability', given its fuzziness, might not be the perfect word choice.

becca: You're right. I used 'surfeit' completely backwards. My bad. But I don't think you mean 'precision' either where you suggest that. 'Reproducibility' I might go with, though. Maybe. At least more than 'reliability'. I like what lylebot says about 'reliability' in #30.

I like what DK says in #28.

In the end, what we're wrestling with here is the tension between simplicity in communication and accuracy in communication. The ultimate in accuracy is to show every data point and not use any statistics at all. But who wants to read The Journal of Long Tables of Results Without Commentary? So we need some summary and interpretation. Summaries will always leave out details, and interpretation is always a 'polishing' of reality. The question is how much simplification and polishing is reasonable. Like I said earlier, I think ultimately that depends on the audience.

Maybe 'supplemental information', instead of the bullshit it typically is now, should actually be raw data and/or pre-statistics tables of measurements for all the stuff that is summarized in the main figures and text. Then a paper could be read on multiple levels. If you just want the take-home message, read the abstract. If you want more detail, read the paper. And if you really need to go deep into the recordings for some reason, get the supplemental info. This plan would probably preclude a lot of the retractions too, because every figure or statement in any text would have the corresponding data and steps in the analysis somewhere in the supplemental info. People couldn't get away with shit. And the hassle for authors would be minimal. They should have the raw data and analysis for everything in the paper already anyway, right?

Posted by: joe | March 24, 2010 10:46 PM

34

i feel sorry for biologists.

you must be constantly dealing with prob/stats incompetence on a daily basis to have a vigorous debate over such a simple concept.

BTW-I voted "revealed an effect of" even though I don't really know what that means cuz:
-i didn't like any of the choices
-it was the last one
-who the hell ever bothers typing something into "other"

Posted by: youngphysicist | March 24, 2010 11:40 PM

35

Lylebot- The beccans dude! Look at the beccans. And the PiT is *tight* I'm telling you! Heck I could just present those instead of the bunnyhopping PiT/PlS ratios and it would *still* be convincing to the naked eye. "effect size". Sheesh. This ain't fMRI homie.

Posted by: DrugMonkey | March 25, 2010 12:26 AM

36

DM -- I am guessing that you received some negative reviews on a manuscript, accusing you of overstating your interpretation of the data. I am also guessing that you need to suck it up, admit you were wrong, and run some more animals and/or tone down your conclusions.

Posted by: Neuro-conservative | March 25, 2010 12:32 AM

37

You would be guessing wrong, N-c. More accurate would be to speculate how many papers I get to review that use "the stats revealed!!!11!" as the results style. Also to wonder about the number of papers I review that act as if non-statistically-significant differences are meaningless and/or proof of "no difference".

Posted by: DrugMonkey | March 25, 2010 1:31 AM

38

Dr. Isis- when I read a scientific paper, unless the context is very unusual, my default assumption is "significant" = statistically significant. The effect size may be very small. Or the effect size may be ginormous and it may still be, from a biological perspective, twice as dull as toast.
When I read a standard newspaper or many blogs, I'm more likely to assume "significant" is supposed to mean "important". That said, when a newspaper or blog is reporting on a scientific topic and they use the term in an ambiguous way, it can be irksome.

So if your pet peeve concerns papers containing statistical analysis where people use "significant" instead of "important", I'm right with you. If, on the other hand, somebody is talking to you and says something like "healthcare reform is a significant issue" and you glare at them and rant about P values, you might be on your own.

Really, communication isn't hard. It's just miraculous.

joe- I am not suggesting the use of 'precision'. Instead, I said the relationship between precision and accuracy is analogous to the relationship between reproducibility and reliability. Analogies... those are them things where you compare one thing to another.
(for reference: in the Blue Collar Comedy Tour movie, Bill says "Yeah. I like to use analogies in my show."
[Larry has a confused look on his face. Bill leans over and stage whispers]
Bill: That's where they compare things... )

Posted by: becca | March 25, 2010 1:49 AM

39

Look, all you're saying with statistics is that you are pretty sure your difference - whatever it is - isn't solely due to chance. Whether you choose to set your N to perceive a difference at p less than 0.5 or less than 0.05 or less than 0.01, assuming you are in the right ballpark about the strength of the effect, you're taking a risk it's still due to chance. Moreover, you're basing your perception of that chance on your possibly flawed understanding of certain properties of the true distribution of whatever you're measuring.

I think you chose this topic to generate a bazillion comments truncated at the less-than sign.

Posted by: ginger | March 25, 2010 6:44 AM
