Talk:Iraq War/Archive 30
This is an archive of past discussions about Iraq War. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Moving AP Death Toll
I've moved the civilian death toll compiled by the Associated Press (AP) to the heading Documented civilian deaths from violence, and I've reordered the headings to put Estimated violent deaths first. The rationale for each change is as follows:
- The AP count is not an attempt to estimate the total number of violent deaths in Iraq. It is a tally of figures from the Iraqi Ministry of Health and the AP's own count. The AP report (which I have linked in the article) states, "The number is a minimum count of violent deaths. The official who provided the data to the AP, on condition of anonymity because of its sensitivity, estimated the actual number of deaths at 10 to 20 percent higher because of thousands who are still missing and civilians who were buried in the chaos of war without official records." The report goes on to say, "Experts said the count constitutes an important baseline, albeit an incomplete one. Richard Brennan, who has done mortality research in Congo and Kosovo, said it is likely a 'gross underestimate' because many deaths go unrecorded in war zones." The AP count is clearly a tally, not an estimate of the total number of violent deaths. It belongs in the same category as the Iraq Body Count figures.
- The best estimates of the total estimated deaths from violence belong first in the list. The best estimates today come from statistical studies, like the Lancet studies and UN study. The tallies (i.e., the IBC and AP counts), which are universally acknowledged to be incomplete, and moreover an insufficient means of coming up with an estimate of the number of war deaths, should come afterwards.
-Thucydides411 (talk) 16:20, 27 April 2014 (UTC)
- I think you make pretty clear above that you are using idiosyncratic definitions and personal value judgments about the relative merits of the different sources in order not only to choose which ones are supposedly 'better' and therefore should come first according to your POV, but also which ones should be blanked out to prevent people from even making different evaluations themselves, such as, for example, preventing readers from even knowing that CoW exists to make their own judgments. The way you are using the issue of "estimated" is laden with opinionated judgments and assumptions that you're trying to impose on readers. Whether or not something constitutes an "estimate" is a subjective matter of semantics and of low relevance. IBC has been widely described as an "estimate" in accepted sources. I don't know whether AP has, but this doesn't really matter. The value of any source is not determined by whether it meets each person's (differing) usage of that word. For example, New York City has an official number of murders each year. Is that number an "estimate"? Some would say no, some would say yes. But the answer to that question is pretty much irrelevant. The best possible number is the true number, not an "estimate" of it. What matters here for a table giving sources for violent deaths in the Iraq War is whether a source is widely cited in reliable sources giving numbers that address that question, and the sources here are. All of the sources have complexities and differences of method, and most of them point to limitations in their approaches and say that there could be more deaths than in their final numbers (AP is hardly unique on that score). I'd also point out that your POV-laden approach has chosen to place what has to be the single most widely disputed source out there (Lancet) first as the "best". Your approach here seems to be more ideological than rational.Billbowler2 (talk) 18:20, 27 April 2014 (UTC)
- I'd also add, what does it say about your arbitrary classification scheme that one of the two "estimated" numbers (IFHS) is actually closer to the "documented" numbers than it is to the other "estimated" number? I think it's just one more reason that imposing your classification scheme (and blanking preference) is arbitrary and idiosyncratic (with some not too hidden POV-pushing piled on top).Billbowler2 (talk) 18:29, 27 April 2014 (UTC)
- I'm not the one making the distinction between estimates of total deaths and tallies of reported deaths. The studies themselves are the ones that make the distinction. For example, the Costs of War project, which you insist on including as an independent count on the Casualties of the Iraq War page, explains the distinction as follows:
- Whether or not the resulting numbers from the cluster sample survey research are valid, cluster sampling from Iraq and other conflicts does show that reliance on media reports of death undercount the true number of dead.
- The AP wire that we cite includes the following remark:
- Experts said the count constitutes an important baseline, albeit an incomplete one. Richard Brennan, who has done mortality research in Congo and Kosovo, said it is likely a "gross underestimate" because many deaths go unrecorded in war zones.
- The basic point that these experts make is that there are two ways to go about measuring casualties in a war. The first method is to tally figures from media reports, morgues, and government agencies (e.g., the Iraqi Health Ministry). This method gives a baseline number, the minimum number of deaths in the war. The second method is to conduct statistical surveys, finding out how many people have died in a random sample of the population, and extrapolating to the entire population, much in the same way as political polls are conducted throughout the world. This method gives an estimate of the total number of deaths, rather than a baseline figure of the minimum number of deaths. The two methods measure different things. They should be separated, and the difference explained. Putting them together is just confusing to the reader. -Thucydides411 (talk) 18:48, 27 April 2014 (UTC)
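For readers unfamiliar with the distinction being debated here, a minimal, purely illustrative sketch of the two approaches (all numbers are hypothetical and taken from no cited study): a passive tally counts only deaths that get reported, while a survey observes a random sample and extrapolates to the whole population.

```python
# Illustrative simulation (hypothetical numbers only): passive tally vs.
# survey-based extrapolation for estimating conflict mortality.
import random

random.seed(0)

population = 1_000_000   # hypothetical population size
true_deaths = 50_000     # hypothetical true number of deaths
reporting_rate = 0.4     # fraction of deaths a passive tally captures

# Passive tally: only reported deaths are counted, so it acts as a lower bound.
tally = int(true_deaths * reporting_rate)

# Survey: draw a random sample, observe the death rate in the sample,
# and scale it up to the whole population.
sample_size = 10_000
deaths_in_sample = sum(
    1 for _ in range(sample_size)
    if random.random() < true_deaths / population
)
survey_estimate = deaths_in_sample / sample_size * population

print(f"passive tally (lower bound): {tally}")
print(f"survey-based estimate:       {survey_estimate:.0f}")
```

Under these assumptions the tally lands at 40% of the true figure, while the survey estimate scatters around the true total with sampling error, which is the "confidence interval" the surveys report.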
- By the way, your accusations of POV-pushing could be equally thrown back at you. I'm arguing for numbers that measure different things to be put in different categories. I'm not arguing about the validity of any one statistical study, like the Lancet survey or the Iraq Family Health Survey (both have come under heavy criticism: Lancet for using too few clusters, IFHS for relying on media tallies for the provinces with the greatest levels of violence and for being conducted by a party to the conflict). I'm arguing for clarity in the article, and I'm basing my argument on what basically everyone says about tallies vs. statistical surveys. -Thucydides411 (talk) 18:56, 27 April 2014 (UTC)
- If these "experts" are claiming "there are two ways to go about measuring casualties in a war", then they're wrong. One could try to impose all kinds of distinctions. Some would hold up better than others. The one which claims that the difference in numbers here is explained by this particular "tallies/statistical" distinction fails on multiple levels, the IFHS-Lancet divergence being an obvious example. And I don't think you are trying to let readers decide or you wouldn't be trying to blank out credible sources like CoW and prevent readers from drawing their own conclusions about it that might differ from yours.Billbowler2 (talk) 19:43, 27 April 2014 (UTC)
- You disagree with the experts. That means you should go ahead and revert your own removal of the distinction. Regardless of what you believe, Wikipedia is based on reliable sources, not on our opinions. By the way, the IFHS-Lancet divergence is not as great as you think, if you look into what statisticians have said about the IFHS survey. IFHS did not survey the most violent regions of Iraq (including Baghdad and Anbar province), and uncovered 400,000 excess deaths (versus 650,000), and given the huge spike in nonviolent deaths after the war (according to IFHS), it has been argued that a large fraction of those deaths were falsely claimed as nonviolent to the surveyors, out of fear of retribution. But this is tangential to the point, which is that experts believe statistical surveys to be the proper way to determine death tolls in war zones. Counts like IBC do not attempt to count total numbers of deaths, nor do they claim to. They give an absolute lower bound on the number of dead in the conflict. Costs of War should be discussed on the other page, but since it is a reprint of the IBC figure, I don't think we should include it as a separate number. -Thucydides411 (talk) 22:44, 27 April 2014 (UTC)
- You're saying a lot of false nonsense here, but there's no point in debating all this. Instead, I'll just share some other academic research that disagrees with you and your spin on cherry-picked "expert" statements. I'd suggest this article on the Lancet study, which concludes, among other things, that the Lancet number is so high because the study is fraudulent, and that sources like IBC or IFHS are both much more accurate. It also demonstrates that many statements made by some "experts" on this topic have been wrong: http://www.informaworld.com/smpp/ftinterface~content=a921401057~fulltext=713240928~frm=content
- You can find a bunch of other material like this from "statisticians" and other "expert" types in the criticism section of the Lancet study page. You're just cherry picking conclusions you like for ideological, not rational, reasons.Billbowler2 (talk) 23:46, 27 April 2014 (UTC)
- I've been arguing about the material, and you've been arguing with a bunch of ad-hominem attacks. That's no way to edit here at Wikipedia. Please adhere to the editing policies at Wikipedia, including reaching consensus before pushing a revert, and assuming good faith. I expect you to revert back to the previous version of the article as a first gesture of good-faith editing. We can discuss the material, and not your accusations of POV-pushing, afterwards. -Thucydides411 (talk) 14:52, 28 April 2014 (UTC)
Billbowler2, there's a difference between scientific surveys that estimate deaths due to violence, and body counts. This is a simple issue. -Darouet (talk) 22:13, 28 April 2014 (UTC)
- See my comment to you on the Casualties of the Iraq War talk page. I address there the arbitrary segregation you want, and some of what's wrong with it. Sure, there's a difference between "scientific surveys" and "body counts". And there's a difference between some "scientific surveys" and other "scientific surveys". And there's a difference between some "body counts" and other "body counts". And there's a difference between "scientific surveys" and "unscientific surveys". And there's a difference between any or all of these and other methods not mentioned so far. If I were dividing these sources up into camps I would use a different criterion than the one you're choosing, but I don't think it's appropriate to segregate sources here to begin with. There's a place for highlighting various differences of methodologies among the different sources. That place is the broader discussion of the sources that goes on down the page, which already does so. It is not in a set of bullet point references in a sidebar. Doing so there only serves to impose subjective biases.
- I also don't really understand your ranting comment about "experts" in your last edit. If it matters, I cite an expert in the informaworld link above. It's a good article discussing many of these sources in some detail, which you may want to read if you're interested in this topic. I also cited another expert (van der Laan) in my comment to you on the other talk page. Here are some other good discussions with these same experts, from an Iraq-focused blog. You may also want to read these:
http://musingsoniraq.blogspot.com/2013/07/a-critique-of-lancet-reports-on-iraqi.html http://musingsoniraq.blogspot.com/2014/01/questioning-lancet-plos-and-other.html
- Of course, it's not possible for me or anyone else to agree with every comment of every "expert" that somebody might cherry pick and post here, particularly not on a topic that's been as disputed as this one. It's also not possible to have an unbiased criterion for who qualifies as an "expert". It's an inherently fuzzy and subjective label, open to disagreement, like "scientific survey".Billbowler2 (talk) 03:48, 29 April 2014 (UTC)
- Your criticism essentially comes down to two points:
- 1) You don't understand the difference between a survey and a body count, and
- 2) You point to sources criticizing the most robust survey (the one published in Lancet) and agree with them.
- I can't help you with the first point, but with the second, yes I know the report was criticized, but it was peer reviewed and has been defended as much as it was criticized. I suppose the true number of violent deaths resulting from this conflict is an inconvenient thing for some people. -Darouet (talk) 16:44, 29 April 2014 (UTC)
- My criticism does not come down to either of those two points. First, a "body count" can be a "survey" and vice versa. But that isn't really the distinction you're clumsily trying to draw. You're talking about a particular kind of survey, which you don't even understand yourself, routinely mislabeling these sources as "excess deaths" in your comments, when some of them are not, etc. Second, I point to sources that say, among other things, that the Lancet survey "is not science", and dispute it precisely on the grounds that it is *not scientific*, yet you want the introduction to arbitrarily label it "scientific". Regardless of what your intent might be, you are engaged in POV pushing.Billbowler2 (talk) 18:56, 29 April 2014 (UTC)
- Well, it's hardly fair to go around accusing others of POV pushing when you're clearly doing the same. All things held equal, the Johns Hopkins study made it into a journal like the Lancet only because it was airtight, and fringe claims of fraud from a single paper in a quasi-journal like Routledge notwithstanding, the validity of the study has never been in doubt. The Lancet's editors consider the methodology valid and had no problem with carrying out subsequent studies, nor have I seen any major medical organization repudiate it. You're referring to science published and upheld in one of the world's foremost medical journals as an unscientific fraud because you don't like what it says, so please have the honesty not to call everyone who finds that odd a POV troll.Freepsbane (talk) 16:05, 30 April 2014 (UTC)
- Freep, the Lancet study has been rejected in numerous peer-reviewed articles and more widely elsewhere. There is of course dispute on the matter, however, as evidenced by all the differing views cited on the page dedicated to that report. If you think something making it into publication in the Lancet means it is "air tight", you are living in a fantasy land (see, e.g., the Wakefield debacle). Also, your bogus claims of "quasi journals" aside, that is far from the only academic source to have rejected the Lancet study. The IFHS in the NEJM also rejected it, among others. But this isn't the point here. I'm not suggesting the Lancet study shouldn't be cited because it was a fraud. There are other views on that question, and it's been reported in many sources so should be mentioned. Thucydides has been the one trying to blank out material he doesn't agree with.Billbowler2 (talk) 17:01, 30 April 2014 (UTC)
- Billbowler2, you're putting words into Darouet's mouth. I was the one who said that IFHS implies 400,000 excess deaths. IFHS did not cite this figure, but rather cited the figure of 150,000 violent deaths. But if you take the death rate they compute after the invasion, as people have done, it implies a number of excess deaths which is consistent with the uncertainties in the Lancet survey. But this entire argument is beside the point, because we're not citing 400,000 excess deaths for IFHS, but rather the 150,000 violent deaths that IFHS stated. I only made this point because you were saying that IFHS and Lancet were wildly discrepant. I was pointing out that if you assume that IFHS' implied number of excess deaths is correct, but that they misattributed most excess deaths (a criticism that has been made of IFHS), then the two surveys are fairly consistent.
- The point we're discussing is whether or not we should provide headings for "Estimated violent deaths" (i.e., statistical surveys) and "Documented violent deaths" (i.e., body counts). This is pretty clear to me. Body counts do not measure the same thing as statistical surveys of mortality. Body counts produce lower limits, while statistical surveys produce estimates of the actual death count. It's much more informative to the reader to separate out these two types of numbers, which represent different things.
- The only remaining question is which heading to put first. I think we should put the heading that represents the total figure first (the statistical surveys), and to put the lower bounds second (the body counts). You may have a different rationale for showing the two headings in a different order, in which case you should explain that rationale. -Thucydides411 (talk) 16:20, 30 April 2014 (UTC)
- You're giving subjective interpretations here. And your interpretations are wrong in many ways. I think you're wrong that "IFHS implies 400,000 excess deaths". The survey data certainly doesn't imply that. You'd have to impose a lot of speculative assumptions on the data to get it to come out that high. And you're wrong that this would somehow mean it is not widely discrepant with Lancet. It would still be widely discrepant with Lancet, as the IFHS authors themselves said in response to a similar claim. And IFHS is not the only survey that's so discrepant from Lancet. ILCS and PLOS are likewise widely discrepant from the Lancet one, but consistent with IFHS. But I don't think any of this is really relevant here. It is not true that "surveys" and "body counts" measure different things, though some do. And it is not true that "body counts produce lower limits". That depends entirely on the "body count" in question and what methods it is using and the reliability of the underlying data being used. Some "body counts" could produce gross overestimates, depending on what methods they're using. Additionally, there are other methods that have been used in wars that don't necessarily fit into these camps. Your headings are also subjective, as many of the "body counts" are also producing "estimated violent deaths" according to many sources. Do I need to list countless sources here calling things like Iraq Body Count "estimates" (including even the Lancet study)? You're using a particular idiosyncratic definition of "estimated" here that is not universally accepted or used by many other cited sources. I maintain that your whole premise here is inherently biased, subjective and inappropriate.Billbowler2 (talk) 17:01, 30 April 2014 (UTC)
- I think Billbowler2 that the big misunderstanding is in the nature of these different kinds of measurements, and in the word "estimate," which it seems to me you are understanding in a colloquial sense. It's true that in everyday speech we "estimate" values in all kinds of ways. But in this case, "estimate" is referring to a procedure that samples a representative portion of a large population randomly, and extrapolates from that sample to the entire population. That procedure is wholly different from one that "counts" deaths that are reported in newspapers or war logs. In this sense, "count" and "estimate" mean two totally different things. Is that clear? -Darouet (talk) 17:26, 30 April 2014 (UTC)
- So the "sense" you mean is a particular idiosyncratic usage of a word ("estimate") that is used and understood differently by others. What I understand is that there are differences in the methods used between sources, in a variety of ways. Those differences are best handled in the discussion, not with arbitrary labels imposed over bullet point references. That requires arbitrary editing based on editorial whim and POV, rather than citation of source material. It requires, among other things, the arbitrary assumptions that:
- 1) the cited sources should be segregated into separate categories
- 2) they should be separated into two categories (rather than three, four...)
- 3) they should be separated based on this particular methodological difference rather than any variety of other criteria
- ALL of those propositions are arbitrary and open to dispute. These choices wind up adopting an arbitrary POV and imposing that over the material no matter how you slice it. Additionally, what happens when there is a source that doesn't fit into your two categories, "samples a representative portion" or "count"? The difference you discuss is handled in the discussion of the sources. There is no basis for labeling in the intro other than POV pushing. So I say no to apartheid in the introductions. You should too.Billbowler2 (talk) 18:20, 30 April 2014 (UTC)
What other categories, in your view, would represent reasonable ways of distinguishing between major data-collection methods and data types? Right now, we have a "body count" and "survey" type that three editors seem to understand and want to make clear to readers. What other types do you propose? -Darouet (talk) 18:27, 30 April 2014 (UTC)
- I proposed some alternatives for the sake of argument elsewhere, but I don't propose any other "types". That requires that I accept the first arbitrary assumption that I noted above: "1) the cited sources should be segregated into separate categories". I don't accept that arbitrary premise. And I certainly do not think bullet-point introductory references are the place to attempt to distinguish methodological differences. That topic is complex and for the discussion areas, not for an arbitrarily selected "label" in the introductions.Billbowler2 (talk) 19:06, 30 April 2014 (UTC)
- Also, take a look at the "Strength" and "Casualties and losses" sections just above the one in question here. There's all kinds of numbers there from a variety of sources, yet nobody has segregated them by methodology "type". Don't you have to separate all of those too? Which ones fall into which "type"? I think a lot of them don't fall into either of your two "types", but can't be sure since the precise meaning of your labels is arbitrary and fuzzy.Billbowler2 (talk) 20:25, 30 April 2014 (UTC)
- The point is, you complain that the division between scientific surveys and body counts is arbitrary, and state that any number of methodological differences could be used to separate sources. When I ask you which kinds of other methodological differences you refer to, you can't name one.
- Fundamentally, again, this comes down to your apparent inability to understand the fundamental difference between a scientific survey and a body count. You call this difference "arbitrary," "imposed" and write above that there could be "three, four… any variety of other criteria," but can't name a single one. -Darouet (talk) 20:40, 30 April 2014 (UTC)
- I notice you evaded my questions. Please answer them.
- I believe I understand these sources and their differences better than you do, as evidenced by your earlier mistaken claims about what they're measuring. However, I don't hold the same view as you about what qualifies as a "scientific survey", and neither do some of the cited sources. That's the problem with arbitrary labeling. Labels like "scientific" and "estimated" that you keep trying to impose and use to segregate sources are defined and used differently by cited sources, and in some cases your usage is directly disputed by cited sources, which means you're taking a POV. You want me to "name" another arbitrary criterion to use to segregate the sources, but I don't accept the premise that they should be segregated here in the first place. On another page I gave you an alternative criterion, just for the sake of argument, based on methodological transparency and verifiability. It was: "can you actually understand from the published material how each source is collecting its data and constructing its numbers, and can you verify that the methodology claimed by each source was actually used by that source?" That creates two camps, on arguably much more meaningful grounds. But I don't accept the premise that the sources should be segregated here in the first place, either with that criterion or with yours.
- Now, answer my questions. Which categories should each of the numbers listed in the 'Strengths' and 'Casualties and losses' sections be placed into? Surely we need to segregate and label those too. Right?Billbowler2 (talk) 21:20, 30 April 2014 (UTC)
- I'm not an expert on how unit/army strengths and casualties were measured, and won't weigh in above other editors there. -Darouet (talk) 21:58, 30 April 2014 (UTC)
- What are you an expert on here? Seems like a convenient evasion. All of those numbers on strengths and casualties among Iraqi security forces, militias, etc., come from differing methods. And there's not a word here in this sidebar about what their methods are, or how they differ from each other. And nobody has tried to create two categories to segregate their methods into. In addition to not "weighing in" above whoever edited the rest of the sidebar, maybe you should take a cue from them and defer to their judgment that this is not the place to go into methodological differences or to impose arbitrary labels based on your interpretation of those differences. I imagine they understood the problems inherent in trying to editorialize in that way.Billbowler2 (talk) 22:32, 30 April 2014 (UTC)
- Forgive me, but I still haven't seen anything convincing about the Johns Hopkins study being repudiated. If anything, from what I've read the methodology is widely accepted and very similar to the sort used for determining figures for cigarette-associated morbidity and the like. With a few exceptions (Routledge is certainly no Lancet), most of the criticism of the study is coming from political groups and bloggers, not peer-reviewed journals. If its scientific validity were legitimately in question, then you'd have a situation like Regnerus, where multiple journals, universities and organizations like the AMA, ASA and APA all say the data doesn't match the claimed conclusions; instead, those claims about the Hopkins study are largely confined to blogs and editorials. You can't legitimately say the scientific validity of a study accepted in one of the foremost journals has been discredited until an overwhelming majority of scientific authorities say so. Something like that would look like the fracas with the NFS study, and that certainly hasn't happened. Freepsbane (talk) 21:57, 30 April 2014 (UTC)
- Freep, it isn't really the point here whether the Lancet study is credible or not, but if you're interested the Lancet study has been rejected by something like seven or eight different peer-reviewed papers, including the IFHS and the other I already cited. Its lead author was also censured publicly by AAPOR for failing to follow fundamental standards of science, in failing to disclose basic information about how the survey was conducted that is necessary to evaluate its scientific merit. The ASA, which you mention above, came out in support of AAPOR on that. But again, this isn't really the point. I'm not trying to have the Lancet study removed or have the page say anything about it that isn't already there. I'm disputing an attempt to impose purely editorialized and disputed labels that segregate sources in arbitrary ways and inherently adopt and impose a POV on what should be impartial introductory references, just like the references in all the other sections of the sidebars here. This editorializing amounts to POV pushing. Its fuzzy labels and (unstated) definitions are disputed by published sources.Billbowler2 (talk) 22:32, 30 April 2014 (UTC)
- In response to Billbowler2's repeated assertion that separating out body counts from statistical surveys is POV-pushing, I'll leave this citation from the recent PLoS Medicine paper on war-related mortality in Iraq (Hagopian et al. 2013):
- "The gold standard for measuring conflict-related mortality is prospective active surveillance, with real-time data collection of mortality events as they occur [62,63]. International initiatives to commence these methods prior to the outbreak of war have been recommended [64], and could be initiated now for the several anticipated or emerging armed conflicts. Failing that, retrospective surveys are the next best approach, despite their shortcomings (which include delays in analysis and reporting, large confidence intervals, lack of good baseline data for comparison purposes, and the inability to capture varying results by sub-region using feasible sample sizes). Body counts based on passive surveillance are the least reliable of methods [62]."
- This isn't the only section in the paper where the robustness of surveys, in comparison to body counts, is discussed. Statistical surveys are considered a more reliable method of determining war-related deaths, and it is acknowledged that body counts, like IBC, are lower limits (for example, Hagopian et al. remark that, "In contrast to IFHS, we skipped only one cluster for security reasons, and did not substitute Iraq Body Count data, which we know underrepresent death rates."). -Thucydides411 (talk) 22:40, 30 April 2014 (UTC)
- I accept that the authors of the PLOS survey (which include authors of the Lancet survey) hold the POV quoted above, and that this POV is a fairly good example of the POV you are pushing. And almost everything asserted there is disputed by other sources with a different POV.
- And you are still wrong about "body counts" being "lower limits". That depends on the "body count". Even if Iraq Body Count constitutes a "lower limit", as the PLOS authors assert that they "know", that still does not mean any "body count" does, as you have falsely been asserting.Billbowler2 (talk) 23:04, 30 April 2014 (UTC)
- Can we please drop the "POV pushing" accusations? I don't think any of us here, yourself and myself included, are completely innocent of it, and tarring people with a POV different from your own does nothing to advance this discussion. Complaints about a hidden agenda notwithstanding, I don't see what's wrong with a PLoS paper defending the validity of that methodology, or why Lancet isn't an acceptable authority on a subject like this.Freepsbane (talk) 23:29, 30 April 2014 (UTC)
Billbowler2 has shown that the proposed labels are contradicted by various sources. The IBC has been labeled "Estimated violent deaths" by many sources, and I'm not sure how well "documented" applies to the sources lumped in there either, or what exactly that is supposed to mean. I doubt readers have any good way to know what these labels are supposed to really mean anyway unless they read further about the cited sources. It should really be pointed out here that nobody seems to be trying to add citable sources or new source material to the page, but rather are simply engaged in editorializing over top of a bit of existing material and trying to impose a particular editorial spin on a group of introductory references, and then deleting some of those references besides. None of the many different numbers given in this sidebar go into issues of methodology or how the sources differ in their methods, and there's no good reason for it to do so just in this instance. It does indeed look like just POV pushing, especially given that the three editors here who seem to want these dubious edits are all bringing up their opinion that the Lancet study is the "best", "most robust", and so forth, which shouldn't be relevant and is clearly something that is in dispute among reliable sources.Marytheo45 (talk) 23:49, 30 April 2014 (UTC)
- And what would your mystery "reliable sources" be? It's customary to use peer reviewed studies from well established journals as your primary sources. The Hopkins study has been upheld by its journal, and there's plenty of medical and academic sources saying that the sort of methodology it used is viable. Rubbing out a peer reviewed source when it's exactly the kind of source WP guidelines say we're supposed to favor, just because you don't like what it says, is engaging in POV pushing just as much as those horrible, terrible editors you rail against.Freepsbane (talk) 23:58, 30 April 2014 (UTC)
- Freep, nobody is "rubbing out" the Lancet study here, so I'm not sure what you are talking about. It's cited in the version I've reverted to, and in all other versions. The only edit that rubs out certain sources is the one you seem to want. Again, it's not the issue whether the Lancet study is credible or not, but here's a peer reviewed study arguing that the methodology it used is not viable: http://jpr.sagepub.com/content/45/5/653.abstract . It is straightforwardly true that the Lancet study has been widely disputed by reliable sources (RS), and you can see many more such references on the page devoted to that study.Billbowler2 (talk) 00:26, 1 May 2014 (UTC)
Since User:Billbowler2 has been accusing me and others of making arbitrary distinctions between methodologies, I'm going to show here that the distinction between statistical studies of mortality and body counts is not one that I thought up or arbitrarily chose, but one which is widely recognized, and is of fundamental importance in understanding the various death tolls that have been compiled for the Iraq War. I'll cite a number of papers that discuss the difference between statistical studies and body counts explicitly. It's not hard to find these quotes. I didn't have to trawl through articles selectively choosing quotes. The first few articles I called up all discussed the difference between the two methodologies, and how important the difference is. Here are the quotes, with links to the articles:
- [Tapp et al. (2008), "Iraq War mortality estimates: A systematic review," in Conflict and Health (doi:10.1186/1752-1505-2-1) http://www.conflictandhealth.com/content/2/1/1]:
- "The 13 studies that we included are separated into two general categories: population-based studies and passive reporting."
- "Below, we summarize the key details from each study, separated according to population-based studies and passive reporting, and whether the data was published or unpublished."
- "The two broad classes of data collection methods, population-based and passive reporting, partly explain the variance in the estimates (See Appendix 2) [27]. The population-based methods are well established and a generally accepted method within the fields of epidemiology [27,33]. Studies using a population-based method are more sensitive for estimating mortality, by identifying non-reported deaths."
- "Compilation from primary sources or passive reporting methods, that rely upon media and/or official sources for mortality information are likely to be more specific, however, would be expected to considerably underestimate true mortality by not capturing unreported deaths and indirect deaths, from non-violent effects of war, for example, that are not often attributed to the ongoing conflict [35]."
- "Of the population-based studies, the Roberts and Burnham studies provided the most rigorous methodology as their primary outcome was mortality [16,18]. Their methodology is similar to the consensus methods of the SMART initiative, a series of methodological recommendations for conducting research in humanitarian emergencies [33]. Another population-based study, the Iraq Living Conditions Survey, reported lower death estimates that we assume is due to the survey being conducted barely a year into the conflict, a higher baseline mortality expectation, and differing responses to mortality when houses were revisited [21]. However, not surprisingly their studies have been roundly criticized given the political consequences of their findings and the inherent security and political problems of conducting this type of research [36,37]. Some of these criticisms refer to the type of sampling, duration of interviews, the potential for reporting bias, the reliability of its pre-war estimates, and a lack of reproducibility. The study authors have acknowledged their study limitations and responded to these criticisms in detail elsewhere [38]. They now also provide their data for reanalysis to qualified groups for further review, if requested."
- "Of the passive surveillance studies the IBC study was, until recently, the most frequently cited by media sources and coalition force politicians [26]. The IBC was largely established as an activist response to US refusals to conduct mortality counts. This account, however, is problematic as it relies solely on news reports that would likely considerably underestimate the total mortality."
- Tapp et al. (2008), a review of different attempts to determine the number of deaths due to the Iraq War, separates out its results into two broad categories, which it calls "passive reporting" (i.e., body counts relying on media reports and other individual tallies) and "population-based studies" (i.e., statistical studies of the mortality rate in the population due to different causes before and during the war). They consider the latter to be the most reliable method, and say that passive reporting produces underestimates. They categorize the Lancet studies, ILCS, IFHS and ORB as population-based surveys, and single out the Lancet surveys as the most methodologically sound. They categorize IBC as passive reporting, and say that it is likely a "considerable underestimate."
- [Carpenter et al. (2013), "WikiLeaks and Iraq Body Count: the sum of parts may not add up to the whole-a comparison of two tallies of Iraqi civilian deaths," in Prehospital and Disaster Medicine (doi: 10.1017/S1049023X13000113) http://www.ncbi.nlm.nih.gov/pubmed/23388622]:
- "Passive surveillance systems, widely seen as incomplete, may also be selective in the types of events detected in times of armed conflict. Bombings and other events during which many people are killed, and events in less violent areas, appear to be detected far more often, creating a skewed image of the mortality profile in Iraq. Members of the press and researchers should be hesitant to draw conclusions about the nature or extent of violence from passive surveillance systems of low or unknown sensitivity."
- Carpenter et al. (2013) attempts to determine what fraction of reports in the Iraq War Logs are found in IBC. They find that most reports with small numbers of casualties were not picked up by IBC, while most reports with large numbers of casualties were picked up by IBC. In other words, IBC misses most incidents in which only a few people are killed, and is skewed heavily towards large attacks that grab a lot of press attention. Carpenter et al. (2013) notes that passive surveillance counts (i.e., body counts) are widely viewed as being underestimates.
This isn't a distinction I invented. It's a distinction that is widely discussed in the literature on studies of war-related deaths. -Thucydides411 (talk) 22:57, 5 May 2014 (UTC)
- Your "distinction" keeps moving around and keeps getting different labels applied from one page to the next and one comment to the next. You've managed to cherry-pick a couple obscure papers by authors of the Lancet survey and their friends that segregate and label sources for their analysis, but even they are not using the same labels you are trying to impose here. In the "Tapp et al" paper, they use the label "passive reporting", a label which Tapp et al invented and which does not have any established definition anywhere outside their paper. In their paper it apparently means "anything that isn't a cluster sample survey", though they don't provide any coherent definition. The "Carpenter et al" paper, co-authored by one of the Lancet study authors, uses a similar but different label, "passive surveillance". That term actually exists in medical literature, but their usage does not match any of those existing definitions, and their unusual usage has been disputed in other papers and elsewhere (See footnote 44 here: www.tandfonline.com/doi/abs/10.1080/10242690802496898). The other broader conclusions you describe above are also disputed by other sources.
- But then you are using different labels than either of those again, using instead "estimated" and "documented", which have already been shown to conflict with the usage of many cited sources. In addition to arbitrary labeling you are also engaging in Hasty generalization. It is not true that "body counts" are "underestimates" as you keep asserting. Whether that is true of any given "body count" depends on the sources and methods used in that particular case.
- You should also stop accusing others of failing to gain consensus when you're the one who is trying to make changes that lack consensus.Billbowler2 (talk) 00:05, 6 May 2014 (UTC)
- My distinction isn't moving around. It's remained exactly the same: there are counts, and there are statistical surveys. People refer to these concepts by multiple names, but they describe the same distinction. Some death tolls are compiled from media reports and government counts (like the Iraqi Health Ministry tally), and some are based on surveys that try to determine mortality from a random sample drawn from the population.
- Point of fact: Tapp et al. (2008) did not invent the label "passive reporting." I don't know what sources you're relying on (or if you're relying on sources) to make this claim. I am able to find sources that use nearly identical language previously, such as the 2nd Lancet study:
- "The US Department of Defence keeps some records of Iraqi deaths, despite initially denying that they did.4 Recently, Iraqi casualty data from the Multi-National Corps-Iraq (MNC-I) Significant Activities database were released.5 These data estimated the civilian casuality rate at 117 deaths per day between May, 2005, and June, 2006, on the basis of deaths that occurred in events to which the coalition responded. There also have been several surveys that assessed the burden of conflict on the population.6, 7 and 8 These surveys have predictably produced substantially higher estimates than the passive surveillance reports." -Burman et al. (2006), in The Lancet, Volume 368, Issue 9545, 21–27 October 2006, Pages 1421–1428 (doi: 10.1016/S0140-6736(06)69491-9, available here)
- A paper published at nearly the same time as Tapp et al. (2008), which would certainly have been under review as Tapp et al. (2008) came out, likewise uses "passive reporting":
- "From 1955 to 2002, data from the surveys indicated an estimated 5.4 million violent war deaths (95% confidence interval 3.0 to 8.7 million) in 13 countries, ranging from 7000 in the Democratic Republic of Congo to 3.8 million in Vietnam. From 1995 to 2002 survey data indicate 36 000 war deaths annually (16 000 to 71 000) in the 13 countries studied. Data from passive surveillance, however, indicated a figure of only a third of this. On the basis of the relation between world health survey data and passive reports, we estimate 378 000 globalwar deaths annually from 1985-94, the last years for which complete passive surveillance data were available." -Obermeyer et al. (2008), "Fifty years of violent war deaths from Vietnam to Bosnia: analysis of data from the world health survey programme," in the British Medical Journal (doi: 10.1136/bmj.a137, available here)
- But this is arguing over semantics. The substantive point is that these papers make the same distinction between mortality studies based on representative survey data, and counts compiled from media reports and similar sources. There are many other papers which emphasize the same basic methodological distinction:
- Hagopian et al. (2013), "Mortality in Iraq Associated with the 2003–2011 War and Occupation: Findings from a National Cluster Sample Survey by the University Collaborative Iraq Mortality Study", in PLoS Medicine (doi: 10.1371/journal.pmed.1001533):
- "The gold standard for measuring conflict-related mortality is prospective active surveillance, with real-time data collection of mortality events as they occur [62],[63]. International initiatives to commence these methods prior to the outbreak of war have been recommended [64], and could be initiated now for the several anticipated or emerging armed conflicts. Failing that, retrospective surveys are the next best approach, despite their shortcomings (which include delays in analysis and reporting, large confidence intervals, lack of good baseline data for comparison purposes, and the inability to capture varying results by sub-region using feasible sample sizes). Body counts based on passive surveillance are the least reliable of methods [62]."
- Rawaf (2013), "The 2003 Iraq War and Avoidable Death Toll," a Perspective in PLoS Medicine (doi: 10.1371/journal.pmed.1001532):
- "The gold standard, mortality surveillance (prospective death reporting), captures real-time data. But with few notable exceptions, national registration systems are barely functional during wartime [6]. Instead, surveys, in the form of retrospective data collection, are often used, although this method has known shortcomings when collecting adult mortality data [7]–[9]. A third option is demographic techniques that compare the age distribution of a population from a census taken both before and after war [10]. However, reliable census data are rarely available and often are disputed politically—and Iraq is no exception [11]. Finally, passive collection of death reports (media, eyewitness accounts, records from health facilities, and national government and international reports) is a method developed in recent years, but is widely criticised by researchers as the least reliable method of ascertaining mortality during conflict [12]–[14]."
- I've established clearly enough that the distinction between body counts and estimates based on statistical surveys is one that is made by many highly cited journal articles. It's not enough to just say that some people have criticized the Lancet surveys, because that completely misses the point. Whether or not some researchers consider a particular study faulty (while other researchers consider it the best study available), there is broad agreement that body counts produce underestimates, and that statistical studies of mortality are a more robust method of estimating the number of war-related deaths.
- Next time you respond, do so with specific citations from journals, disputing this particular point. If you can do so, we can begin to weigh the sources against each other, to find where the balance lies. So far, we have a whole host of sources that delineate between body counts and survey-based methodologies, but only criticisms of particular studies on the other side. You haven't presented anything to show that researchers disagree with the distinction I'm pointing out, only that some people criticize certain surveys. -Thucydides411 (talk) 04:05, 6 May 2014 (UTC)
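- As an aside for readers unfamiliar with the distinction being argued over here, the difference between the two methodologies can be illustrated with a toy simulation (illustrative only; all the numbers below are made up and do not come from any of the cited studies). A passive tally counts only deaths that get reported, so by construction it misses some fraction of them, while a population-based survey interviews a random sample of households and extrapolates the sampled death rate to the whole population, with sampling error:

```python
import random

# Toy illustration (not any study's actual method): a simulated population
# where a known number of war deaths occurred, only some of them reported.
random.seed(0)

N_HOUSEHOLDS = 100_000
TRUE_DEATH_RATE = 0.02   # assumed fraction of households with a death
REPORTING_RATE = 0.5     # assumed chance a given death is ever reported

# True (unobservable) death toll.
households = [random.random() < TRUE_DEATH_RATE for _ in range(N_HOUSEHOLDS)]
true_deaths = sum(households)

# "Passive reporting": tally only the deaths that happen to be reported.
passive_count = sum(1 for d in households if d and random.random() < REPORTING_RATE)

# "Population-based survey": interview a random sample of households and
# extrapolate the sampled death rate to the whole population.
sample = random.sample(households, 2_000)
survey_estimate = sum(sample) / len(sample) * N_HOUSEHOLDS

# The passive count falls below the true total (here it misses roughly half
# the deaths by construction), while the survey estimate scatters around the
# true total with sampling error.
print(true_deaths, passive_count, round(survey_estimate))
```

The sketch is only meant to show why the two kinds of figures are not directly comparable: the tally's shortfall depends on the (unknown) reporting rate, while the survey's error is sampling noise around the true value.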
- You claim your distinction is not moving, yet the labels change from one post to the next and one quotation to the next. None of the above sources use your labels, but all you are trying to do here is impose your particular labels. All of your citations above are quotations from Lancet study authors and their friends and relatives. This does not establish any "broad agreement", and most of the claims above are disputed by other sources. Moreover, some of the claims simply don't make any sense.
- Apparently the terms "body count", "documented", "passive surveillance", "passive reporting", "passive collection" are all supposed to be interchangeable synonyms. This seems to be what you're saying. So, in your Rawaf citation, we're supposed to believe that the "body count" (supposedly a synonym with "passive collection" here) is a "method developed in recent years"? That claim seems idiotic. People have been doing "body counts" since long before anyone was doing cluster sample surveys. Show me some wars prior to maybe the 1990's whose death tolls have been established by one of your "statistical surveys". I don't think there are any. I could find plenty of "body counts" though.
- There is also no "broad agreement that body counts produce underestimates" or that "statistical studies of mortality are a more robust method". Both conclusions are Hasty generalizations. That is why, for example, Mark van der Laan (previously cited, among others) rejects the Lancet study as worthless and accepts IBC as credible, because the value of any given source is not determined by which of your two crudely generalized methodological categories it falls into. A "statistical survey" can be worthless crap and a "body count" can be accurate and reliable, or vice versa. It all depends on the particular "statistical survey" or the particular "body count". The merit is not determined by lumping them into two crudely generalized camps. That is why I've cited pieces (including from journals) that do things like reject the Lancet study as not credible, while accepting things like IBC as credible. They demonstrate that the kind of crude generalization about methodology "type" that you're making does not determine the merit of any given source. Nor do such generalizations determine whether any given source is an underestimate or overestimate, as you keep asserting.
- As an example, there are some "body counts" listed on the Iraq casualties page that are more broadly agreed to be overestimates than underestimates: https://en.wikipedia.org/wiki/Casualties_of_the_Iraq_War#Iraqiyun_estimate and https://en.wikipedia.org/wiki/Casualties_of_the_Iraq_War#People.27s_Kifah .
- I think these would fit into your "body count" camp. But I doubt that many people consider them underestimates or "lower limits". And if you compare them to some of the sample surveys, those "body counts" are actually higher than most of them, or even all of them in the latter case. But this couldn't be true if what you keep saying were true. According to you, we should be able to just put them into camp "body count" and then we could supposedly conclude that they are "lower limits".
- More broadly, there's been a lot of discussion of death estimates in Syria. See here: https://en.wikipedia.org/wiki/Casualties_of_the_Syrian_Civil_War
- It seems like all of the varying sources there would fit into your "body count" camp. I'm not aware of any "broad agreement" that all these sources are underestimates or "lower limits". Quite the opposite. Here's a report that looks into many of these sources, http://www.oxfordresearchgroup.org.uk/publications/briefing_papers_and_reports/stolen_futures . It stresses caution in interpreting the figures and notes there is no certainty about whether these various "body counts" are too high or too low:
- "Nonetheless, simple totals throughout this study and elsewhere should be treated with caution and be considered provisional: briefly put, it is too soon (and outside the scope of this study) to say whether they are too high or too low." ... "It cannot be stated with certainty at this time whether these numbers should be considered too low or, owing to deficiencies in the original data or our merge process, too high. They certainly should not be taken as exact, definitive, or without scope for improvement."
- According to what you keep asserting, that kind of conclusion should be unthinkable. All you have to do is say those Syria methodologies are "body counts", or "passive reporting" or whatever other dubious label, and then supposedly they have to be "underestimates" or "lower limits". But they aren't. Reality just refuses to cooperate with your Hasty generalizations.
- More to the point, you are not engaged in any methodological discussion or analysis in your attempted edits here. You are just trying to slap a label onto a few bullet-point references in a sidebar that otherwise does not mention methodology anywhere. And your labels aren't even the same as the various labels you're quoting above. I do not argue that people haven't made distinctions sort of like yours in different contexts involving methodological discussions or broader evaluations. Of course methodologies differ. The problem here is that this sidebar is not a methodology discussion and the labels you're trying to impose here (not to mention your other conclusions) are contradicted by many cited sources, as I have already shown.Billbowler2 (talk) 06:36, 6 May 2014 (UTC)
- "You claim your distinction is not moving, yet the labels change from one post to the next and one quotation to the next." Do you mean that the distinction is changing, or that different papers use slightly different phrasing when referring to the distinction? I've explained exactly what the distinction is (counts of reported deaths versus statistical surveys), and numerous (perhaps most) papers on war deaths discuss this difference. The one review article we have here, Tapp et al. (2008), even separates out the different studies along exactly this basis.
- "All of your citations above are quotations from Lancet study authors and their friends and relatives." This is an amazing statement. Care to back it up?
- "This does not establish any "broad agreement", and most of the claims above are disputed by other sources." Care to cite any sources? So far, all you have is a couple blog posts attacking a particular mortality study, and one journal article criticizing the ethics of that same survey. You don't have any journal articles that dispute that statistical methods are superior to body counts in determining war deaths, or that dispute that the distinction between the two methodologies is significant. The Lancet surveys have been both attacked and held up as the best studies on Iraq war deaths, but the argument isn't over whether the Lancet surveys are the best study. You're disputing that the distinction between survey-based methods and body counts is one made by experts in the field.
- "Moreover, some of the claims simply don't make any sense." If you disagree with the scientists who write on wartime mortality, that's your right as a free person. If you're arguing here, I'm going to require something more than your opinion, though. Cite journal articles that call the articles I'm citing nonsense, and show that they're widely considered nonsense by experts. That's the standard we go by, not your personal opinion of what makes sense.
- I've been citing paper after paper here, with detailed quotations that include context, and all you post in response is blog posts, or quotations from think tanks (e.g., Oxford Research Group) with little indication they have any relevance to estimates of war-related deaths in Iraq. Get serious and start citing journal sources, and give enough context to make it clear that they actually agree with what you're saying. More long posts elaborating your opinions aren't helpful, and they're frankly just distracting from the goal here, which is to find out whether experts consider the distinction between body counts and survey-based mortality estimates important. I've posted enough highly cited journal articles emphasizing this distinction that its importance should be clear by now. -Thucydides411 (talk) 15:18, 6 May 2014 (UTC)
- Would have to concur, just by the sources Thucydides dropped, it's clear that as far as journal sources and other material we can cite go, his position is clearly the consensus amongst experts. I don't see any reason why we shouldn't include the version of edits he advocates when that's what the sources, partisan blogs and think tanks notwithstanding, say.Freepsbane (talk) 21:31, 18 May 2014 (UTC)
- The sources Thucydides is cherry-picking are all associated with the authors of the Lancet study and push their POV, which is disputed by other sources. Nothing wrong with sources having a POV, but it must be presented as such with attribution, and in appropriate context. Moreover, as previously noted, those cherry-picked sources do not even use the labels you want to impose here anyway. They each use different, poorly defined labels, and those labels - and most certainly the conclusions they draw from them - are themselves disputed by other sources. The labels you're trying to impose here are even contradicted by sources associated with the Lancet study authors whose POV you guys are clumsily trying to push:
- - Lancet 2004 survey - "the estimate of coalition casualties from http://www.iraqbodycount.net is a third to a tenth the estimate reported in this survey"
- - Lancet 2006 survey - "The best known is the Iraq Body Count, which estimated that, up to September 26, 2006, between 43 491 and 48 283 Iraqis have been killed since the invasion."
- Not to mention other sources not associated with Lancet study authors:
- Iraq Family Health Survey (NEJM) - "Estimates of the death toll in Iraq from the time of the U.S.-led invasion in March 2003 until June 2006 have ranged from 47,668 (from the Iraq Body Count)"
- I could cite many further sources in journals, news reports, and elsewhere that contradict your arbitrary labeling. You guys are editorializing, trying to present a disputed POV on the page without attribution. There have also been two editors here already noting and opposing this, including me. Three editors who want to POV-push and two editors who say they should not is not a "consensus" for the POV-pushing. And a couple cherry-picked journal sources that are all associated with the same set of authors is not a "consensus" either. And nor are "journal sources" the only relevant sources for WP in the first place. And not only are you editorializing, you are also blanking out actual cited source material at the same time. Yet you just keep pushing the same edits over and over again.Billbowler2 (talk) 21:03, 19 May 2014 (UTC)
- The "Second" editor opposing the lancet studies is a completely new user with no other edits other than one "discussion" post consisting of nothing more than tarring all the editors who took time to actually discuss this point of and a trio of curiously placed reverts pertaining solely to this dispute. You can't claim that you have a consensus in your favor. Furthermore, this thread's aspersions nonwithstanding, all those journal sources supporting the Lancet study are peer reviewed, accepted and published by well regarded journals with a well accepted public health methodology. They have more validity than a handful of political blogs, think tanks and news articles.Freepsbane (talk) 18:41, 20 May 2014 (UTC)
- Interesting that the above editor equates opposing his edit with "opposing the lancet studies". Need more be said? The previous version to which I and another editor have reverted neither promotes nor opposes the Lancet study. It simply lists the main numbers of some widely cited sources in a neutral way. But it is precisely this neutrality that bothers them and which they seek to remedy with editorializing.
- Also, not that this should matter, but you seem to have a very misinformed view of the literature and debate surrounding the Lancet study. First of all, "think tanks and news articles" are entirely appropriate sources for WP. And where these have a POV, they need to be represented as well, in appropriate context. They cannot simply be discounted because they aren't "peer reviewed". Second, sources from reviewed journals have been varied in their judgments of the Lancet study. The IFHS, which I cited above, was from the New England Journal of Medicine, and it rejects the Lancet study. Numerous other peer-reviewed papers have rejected the Lancet study. At this point I think there are more peer reviewed papers rejecting the Lancet study than there are supporting it. So your apparent premise that there is some kind of consensus for the Lancet study among "peer reviewed" sources, while the only places it is disputed is blogs and such, is simply wrong. But all of this just reinforces my view that this whole attempt to push the same edit over and over again is about three guys trying to clumsily POV-push for the Lancet study.
- Lastly, I have never claimed to have a "consensus". I'm not the one trying to insert new changes here. I've said that your proposed changes do not have consensus and contradict numerous citeable sources. You are the ones falsely claiming to have a consensus.Billbowler2 (talk) 22:20, 20 May 2014 (UTC)
- Nobody is trying to "push" the Lancet mortality studies. Several editors here, recognizing that the scientific literature almost universally considers the difference in methodology between studying mortality statistically and counting reported deaths to be fundamental, think that this distinction should be made when presenting different casualty estimates. Note that both the 2nd Lancet study and IFHS are listed right now (as an aside: we should really add Hagopian et al. 2013 to the list). We don't cite the 2nd Lancet study as the Gospel. We simply take into account the distinction between studies that use statistical means to try to estimate the total number of war related deaths and counts that tally reported deaths. As the scientific literature notes, the former methodology, for all it's difficulty, is considered superior to the latter in determining the total number of war deaths, while the latter methodology produces undercounts. This isn't an issue, specifically, of the Lancet studies vs. Iraq Body Count. As the one review article we have on the topic (Tapp et al. 2008) emphasizes, it's a difference between two different methodologies. If you think this distinction is editorializing, then take it up with the scientists who measure war deaths, but don't think that the Wikipedia editors here in favor of following the scientific literature are the ones who are editorializing. -Thucydides411 (talk) 04:53, 21 May 2014 (UTC)
- You aren't following the scientific literature. First, you are imposing labels using terminology that is contradicted by usage in the scientific literature, as I have shown above, and repeatedly elsewhere. Second, "scientific literature" is not the only relevant source for WP. You can't just say that views expressed in a small handful of "scientific" articles are the only relevant viewpoint. Third, you are not even credibly evaluating the viewpoints in the scientific literature. You are just cherry-picking articles that you interpret to serve the conclusions you want and interpreting them in ways you want. One of the small number of supposedly authoritative articles you cherry-picked is an article by Obermeyer et al, which sort of claims what you are claiming above (though it does not use the wordings you are trying to use in this edit). What you fail to note is that every "fundamental" conclusion of Obermeyer et al has already been refuted in the scientific literature.
- Obermeyer et al, who thank one of the Lancet authors for his contributions to their paper (every source you cite is connected with the Lancet authors in some way, like a game of 6 degrees of Kevin Bacon, but more like "0-1 Degrees of the Lancet Study"), were hoping to prove one of the Lancet authors' assertions about methodologies. So they went about creating some crude category they called "passive surveillance data". Then they claimed that the survey results were consistently higher than the "passive surveillance data" and this was because of "undercounts" in the latter. They claimed the "latter methodology produces undercounts" and the survey approach is "superior". So basically you buy into almost exactly this viewpoint and your editing here is trying to push this viewpoint.
- But what you fail to note is that the Obermeyer et al paper was refuted on all these counts. The following paper shows, among other things, that the so-called "passive surveillance data" were higher than the survey estimates in many cases. They had cherry-picked a biased sample of high survey estimates, while discarding low survey estimates from the comparison. There was no consistent relationship at all in terms of under/over-estimating between their (your?) two categories of methodology. Their data could not support the claim that the "passive surveillance data" under-counted at all, let alone in the amount they were claiming. And their data could not support their claim that their survey approach was in any way superior to the approach(es) they were criticizing. The rebuttal authors maintain that many of the sources used in the "passive surveillance data" are as likely to overestimate as underestimate and that their methodology is as good or better than the survey approach advanced by Obermeyer. You can see this rebuttal here: http://jcr.sagepub.com/cgi/content/abstract/53/6/934 , http://ns425.cawebhosting.com/docs/Publications/Additional-Publications/Spagat-Mack-JConflict-Resolution-EstimatingWarDeaths.pdf , http://www.hsrgroup.org/docs/Publications/Additional-Publications/Spagat-Mack-JConflict-Resolution-EstimatingWarDeaths-Appendix.pdf
- It's clear that your basis for this edit is that you have adopted the Obermeyer et al-type conclusions almost exactly, and are trying to filter all reference to these Iraq sources through the lens of those conclusions before any reader has a chance to even read about them. But all of those conclusions are wrong and unfounded, as the article above demonstrates. At the very least, those viewpoints are directly disputed by other sources in the scientific literature that you are ignoring, which means you are cherry-picking and pushing a POV. In short, there is no scientific evidence that sample surveys are "superior" to any other methodology, or that any other methodology necessarily underestimates war deaths. That is just a free-floating prejudice of some proponents of survey methodology, a prejudice which Obermeyer et al tried, and failed miserably, to support with some kind of actual data.
- If I may try to help you out for future reference, no credible researcher would ever claim that "counts that tally reported deaths" are "undercounts" solely on the basis of fitting into that "type" of methodology. That is, at best, a Hasty generalization and they are either ignorant of the various methodological approaches that can fit into such a type and the associated problems that can arise, or they are propagandists who are pushing an agenda and trying to mislead you. In the meantime, stop trying to pass off disputed opinions or interpretations as fact. If you want to push the opinions of Obermeyer or the Lancet authors on the page, do so in appropriate context and with the relevant citations. This sidebar with bullet references is not the place for such POV pushing, or even the place for methodology discussion to begin with, as the lack of such discussion for any of the numbers elsewhere in the sidebar indicates.Billbowler2 (talk) 08:09, 21 May 2014 (UTC)
- First of all, you cite Spagat et al. (2009) in support of your position, which is that we should not separate out surveys based on methodology. But then, the third sentence of the Spagat et al. (2009) paper reads,
- "There are two main methodological approaches to estimating war deaths; one relies on collating and recording reports of war fatalities from a wide variety of sources, while the other relies on estimates derived from retrospective mortality surveys."
- Spagat et al. (2009) primarily criticizes Obermeyer's specific work, arguing that there are various flaws in the manner in which Obermeyer et al. (2008ab) combined different surveys, and in how they compared their figures to those compiled by the International Peace Research Institute, Oslo (PRIO). Spagat et al. (2009) does not make the point you'd like it to make, which is that there is no fundamental difference between statistical studies of mortality and tallies of reported deaths. Instead, Spagat et al. (2009) points out the problem of undercounting that arises with body counts:
- "With incident-based methodologies—as is well understood by all those involved with conflict data collection—there is a real risk that many deaths will go unreported, even though access to fatality data has improved dramatically since the advent of the Internet and powerful search engines.
- "The reasons are not difficult to discern. Governments may forbid reporting of war deaths—particularly of their own forces. Journalists and other observers are sometimes banned from war zones, as in Sri Lanka in 2009, or may stay away because conditions are simply too dangerous. And in wars with very high daily death tolls—like Iraq in 2005 and 2006—violent incidents with very small numbers of deaths may go unreported. 21 There is also a risk, although we believe it is a lesser one, that death tolls will be overreported.
- "Furthermore, report-based methodologies, no matter how accurate, can never determine the number of 'indirect' war deaths—that is, those arising from war-exacerbated disease and malnutrition."
- Since you've latched onto differences in terminology in order to argue that I'm muddling things, here's a translation: "report-based methodologies" = "incident-based methodologies" = what I've been calling body counts or tallies of reported deaths. Here you have Spagat using two different terms interchangeably, "report-based" and "incident-based methodologies." I hope we can put the criticism to rest that I'm conflating different terms.
- Spagat et al. (2009) says that there are difficulties with survey-based estimates, like individuals' imperfect ability to recall family events from the past. But the paper also says that surveys can measure total deaths due to war, while tallies of reported deaths cannot. The most important point that Spagat makes, however, is that "[t]here are two main methodological approaches to estimating war deaths; one relies on collating and recording reports of war fatalities from a wide variety of sources, while the other relies on estimates derived from retrospective mortality surveys." That's exactly my point. What's your point? That some people have criticized the Lancet studies? We all know that. We also know that other researchers consider the Lancet surveys the best work done on casualties of the Iraq War (I went through this above). But we're not putting the Lancet survey out there as the be-all-and-end-all of estimates. We list two survey-based estimates right now, and I think we should add Hagopian et al. (2013) to the list. I'm just in favor of making the distinction that every paper that's been quoted in this discussion so far says is important, between survey-based estimates and tallies of reported deaths. -Thucydides411 (talk) 19:23, 21 May 2014 (UTC)
This is unfortunately a long response because the issues and disagreements keep multiplying. I think that just as you've been cherry-picking articles before, you're now cherry-picking lines from the Spagat-Mack article above. I'll try to take each point you raise. For simplicity's sake, I'll refer to the Spagat et al article as (SMCK), and follow their lead in referring to the Obermeyer et al article as (OMG).
You say the papers we've cited here make a distinction between your two types of methodology, so this justifies imposing your version of this distinction onto a set of bullet point references in an introductory sidebar that is not discussing methodologies anywhere. But the papers we've been citing are ones doing evaluations of methodology, and ones that are relevant to the discussion we are having here (about classifying different types). SMCK is responding to a paper whose main premise is making a two-sided dichotomy and drawing meaningful conclusions from that, and is thus responding in that framework (even though at least one of these two categories is very poorly defined, according to them). I could cite other papers that don't evaluate things in a dichotomous two-tier approach, but many of these might not be very relevant to this discussion. The other Spagat paper about the Lancet study that I referenced earlier actually evaluates many different sources, estimating deaths in Iraq and some other conflicts, but it does not do this by way of a two-tiered categorization scheme. Am I supposed to list every paper that mentions or discusses different war death estimates without using a two-tiered categorization scheme? Is this relevant? How many such sources do I have to list before you change your mind? I'm sure I could list quite a number of them, but I'm not sure what would be the point.
You keep saying that there is a "fundamental difference between statistical studies of mortality and tallies of reported deaths", and now you claim that SMCK supports this and that I'm supposedly saying there isn't. But I consider this a kind of weasel-wording on your part. There are fundamental differences *of methodology*. That is one thing. Another thing is whether there is a fundamental difference of *outcomes* based on the two "types". Is one type consistently more right or wrong, do their outcomes differ in a consistent way, etc. That has been your main premise. It was also OMG's main premise, which makes it no surprise that it was one of the few things you cited to support your position. They tried to show that there is a systematic difference of outcomes, and you accepted that conclusion and asserted it as some sort of "scientific consensus". Spagat et al showed that there isn't a systematic difference of outcomes and simply putting sources into two camps is basically uninformative for assessing outcomes. To the extent that anyone has tried to pin down particular outcome differences along those lines (OMG, Lancet, etc.), those conclusions have been refuted or disputed.
What you want, as you've made clear repeatedly, is for readers to draw conclusions about outcomes (the listed numbers) based simply on which of your two types you put these things into. And that one type is "superior" to the other at producing better outcomes. And your edit is supposed to help readers get these very important points. SMCK, among others, reject those points. There is no clear conclusion to draw about outcomes based simply on classifying two types, and there is no consistent relationship of outcomes between them.
One of the problems is that the "passive surveillance data" category, whether as mentioned in the Lancet study or OMG, or you here, is poorly defined and winds up being a vague hodge-podge of varying approaches which mostly have in common that they aren't statistically extrapolating from a sample. This is one of the reasons why SMCK use varying and often vague descriptions of that category from one place to the next. Sometimes they are talking about a broad category and other times they are talking about particular applications (like Iraq Body Count) that fall somewhere within that category. And there's a general lack of clarity about what OMG or others precisely mean by this category and what term should be used for it. SMCK seem to not accept OMG's term "passive surveillance data" as an appropriate description, but they don't adopt any other clear term either. The reason is because this category that is supposedly the dichotomous "other" to sample surveys has never been properly defined as a category anywhere and no definition has been widely used or accepted among researchers. The data SMCK is defending here covers dozens or hundreds of wars over the course of a century. Maybe these aren't "statistical survey estimates", but defining what they all collectively are isn't easy. They aren't all just "like Iraq Body Count", which is I think more or less what OMG was trying to go for here.
This "other" category gets defined rather vaguely by SMCK as "collating and recording reports of war fatalities from a wide variety of sources". But this is of course very vague, and I think it's actually not quite the same category that you have in mind in your edit. They go between terms like "report-based" and "incident-based" in different spots, but the "wide variety of sources" actually used in their data does not always fit into those terms, some are not "incident-based" for example, and sometimes it's "report-based" only in the sense that the estimate has been "reported" somewhere, like by a historian in a history book. In the edit on this page you are using the term "Documented", but none of these papers use that term. The only source I know to use that term regularly as a label is Iraq Body Count, but that is a term it uses to refer to itself, not some broad category of methodology type that stands in a two-way dichotomy with sample surveys, or at least not one that is coherently defined or widely used or accepted as such in citeable sources.
You have repeatedly asserted that your "Documented" type necessarily "produces undercounts" (outcomes) as a simple matter of being of that type, and this is one of the main reasons we have to do your preferred edit. But if your category is the same as OMG's or SMCK's but just with different terminology, as you insist, then SMCK says otherwise. Noting a "risk" of undercounting is not the same thing. Contrary to what you've been saying, they say such approaches can produce over-estimates, produce under-estimates or produce the right number. The line you quote only says they believe the risk of underestimating is greater than the risk of overestimating. And they also say that some of these sources "are as likely to overestimate as underestimate battle-death tolls." But clearly, according to them, you cannot know anything about the outcome simply by it being in this category. They likewise note that sample surveys "risk" undercounting deaths as well for a variety of reasons. In fact, in the appendix they argue that small-scale surveys (like the Lancet studies) seem to fall into a pattern where they tend to either badly overestimate or badly underestimate violent deaths, rather than tending to get the right answer. For that matter, this and other sources have suggested that the difference between small-scale and large-scale surveys is a fundamental distinction for outcomes. Should we separate that "Estimated" category again into small- and large-scale survey groups too?
You assert that the "most important" point from SMCK is the line about "two main methodological approaches", but that is your own tendentious interpretation. I don't think it's very important at all here. The most important point they make is the main point of their paper: that you cannot draw any meaningful conclusions about any given estimate or set of estimates based simply on which of these two broadly-defined (read: poorly defined) methodology types it falls into. OMG tried to show you could and failed. That is the most important point.
Your main premise up to this point has been that readers of this page cannot be allowed to simply see a cursory list of a few widely cited sources in some bullet points in a sidebar (a sidebar which never mentions methodology anywhere else, even though it's giving all kinds of other numbers too). They *have to* see them only segregated into one of two (poorly defined, and dubiously labeled) methodological types. There is supposedly something really important they have to glean from this dichotomy about the outcomes (the listed numbers). SMCK directly contradicts that premise.
You also complain that I've "latched onto differences in terminology", but that is all you are actually proposing here: to slap a particular piece of terminology onto a set of bullet points, with no kind of context or explanation or direct citation. And your terminology contradicts the way the terminology is used even in the papers you are citing. For one of your two categories here, the terminology is "Estimated violent deaths" (which you're now terming differently as "survey-based estimates" or "statistical studies of mortality"). But SMCK refers to both of your categories as "estimated" deaths, as do many other sources, in contradiction to the actual edit you keep doing. You then say that your category of "Documented violent deaths" is the same thing as every other term thrown around in these papers for this poorly-defined category, but as I noted above, I don't think they are, or at best it isn't clear that they are. You say that this is the same as "report-based" or "incident-based" methodologies, but some of the sources used in the data OMG is criticizing and SMCK is defending just estimate a number and describe no "incidents" and could not be coherently defined as "incident-based". Maybe all the sources lumped into this category aren't extrapolating from a sample, but in some cases that may be as much as they really have in common.
My point is similar to SMCK's point. Nothing "fundamental" about outcomes can be determined simply by labeling sources as one of two methodological groups. These two groups are poorly defined and their definitions not widely or universally accepted (which is why your terminology is always getting contradicted by cited sources.) And the supposed outcome differences that you have repeatedly put forward as the main basis of needing your edit are also not widely or universally accepted either. Without this, there is no longer any point to segregating a group of otherwise simple and neutral bullet point references in the context of a simple introductory sidebar that never mentions methodology or methodological issues anywhere. The only point seems to be pushing the OMG-type POV that there is something really important to be gleaned about the outcomes from their/your two-type framework, and that one type is supposedly "superior" in its outcomes.
In other words, the only point actually left seems to be POV pushing. If there is a place for the kind of things we've been discussing here it is where the sources and their methodologies are actually discussed, and in which context it could be appropriate to cite the OMG view about what two-types means and what is supposedly important about this way of looking at things, and then SMCK's or other views about it. The appropriate place is not in a sidebar that has nothing to do with methodological discussion or debates and is just giving a handful of simple, neutral references to some widely cited sources.Billbowler2 (talk) 06:15, 22 May 2014 (UTC)
- Billbowler2, your wall of text is gratuitous. This is your main point:
- My point is similar to SMCK's point. Nothing "fundamental" about outcomes can be determined simply by labeling sources as one of two methodological groups. These two groups are poorly defined and their definitions not widely or universally accepted (which is why your terminology is always getting contradicted by cited sources.) And the supposed outcome differences that you have repeatedly put forward as the main basis of needing your edit are also not widely or universally accepted either. Without this, there is no longer any point to segregating a group of otherwise simple and neutral bullet point references in the context of a simple introductory sidebar that never mentions methodology or methodological issues anywhere. The only point seems to be pushing the OMG-type POV that there is something really important to be gleaned about the outcomes from their/your two-type framework, and that one type is supposedly "superior" in its outcomes.
- That's just completely false, and despite your repeated protestations that you've cited numerous sources that disprove my point, you haven't actually provided any sources that do so. Notice how when I cite journal articles, I clearly state how the source relates to the point I'm making, and cite the passage that supports my point directly. I'd like to see that from you, if there are actually any journal articles that make your point.
- As for the Spagat article, you're now willfully muddling the points it makes. The paper states very clearly that "report-based" methodologies cannot measure the total number of war-related deaths, and that the risk of under-reporting deaths is much higher than the risk of over-reporting. I cited the relevant passages in my above post. You state,
- "Contrary to what you've been saying, they say such approaches can produce over-estimates, produce under-estimates or produce the right number. The line you quote only says they believe the risk of underestimating is greater than the risk of overestimating."
- So far, every paper cited in this discussion that has had anything to say about overcounting vs. undercounting has stated that "report-based" methodologies likely produce undercounts. You're correct that the line I quoted merely states that the authors believe that the risk of underestimating is greater than the risk of overestimating. What experts believe is precisely what is relevant here. That's what gets included in the article.
- We also have several articles stressing the difference between report-based death tolls and statistically estimated death tolls. You haven't provided any articles that argue that this is an unimportant distinction. So far, the argument about the distinction being important comes only from you, because you don't think the results are meaningfully different. You have a right to an opinion, but not necessarily to have it represented in the article. Notably, the one review article we have on Iraq War deaths is organized along the principle I'm advocating: separating out report-based counts from statistical estimates.
- That's the delineation we should follow, since it's the delineation pretty much every paper on the subject stresses. If you don't have consensus to revert, don't do so again. -Thucydides411 (talk) 21:35, 22 May 2014 (UTC)
Insurgency is ongoing
The statement that the "Insurgency is ongoing" represents original thought, which is prohibited by Wikipedia rules. Who decides what is an "insurgency" as compared to a "defense of my homeland"? This is purely a statement of opinion which should be removed.Outofthebox (talk) 03:22, 10 June 2014 (UTC)
"Commanders and Leaders" - trim this list
I propose that we narrow down the number of "commanders and leaders" in the infobox; there is a case to be made that only commanders and leaders who had a direct and substantive role in the war itself should be listed in the infobox. The rest of the infobox is likewise too busy with details and not conducive to an average user coming to the site for encyclopedic content, and should similarly be cut down.
The original intent of this part of the infobox was to highlight the people who played a significant role that changed the course of the war, whether through policy or through commanding battles; thus the WWII list should have the likes of Churchill, Roosevelt, Eisenhower, Patton, and so forth. The Prime Minister of New Zealand did not significantly contribute to any decisions that may have altered the course of WWII and therefore should not be included in the 'list of leaders and commanders'. The same should apply here. There is no foreseeable reason why someone like Roh Moo-hyun or Silvio Berlusconi would ever be remembered for any significant role in the Iraq War; they are only on the list because they happened to be the heads of state of countries who contributed some coalition forces at the time the war was being fought.
My proposal for a revised list is below to allow for discussion and consensus. If there are no objections I will go ahead and perform the changes. Colipon+(Talk) 20:28, 19 June 2014 (UTC)
- I agree that the list should be more focused. However, I think that Tony Blair and Gordon Brown definitely warrant a spot on that list. Tony Blair is especially important to include, as he campaigned vociferously for the war and the UK sent a pretty sizable contingent to Iraq, and were the primary occupying force in the South of the country. -Thucydides411 (talk) 03:37, 20 June 2014 (UTC)
- I agree. Supersaiyen312 (talk) 14:29, 27 June 2014 (UTC)
- I do not know that we need this list, which will always be subject to debate. It seems most suited for wars of limited duration with a smaller number of commanders and leaders. TFD (talk) 16:18, 28 June 2014 (UTC)
Results Section
Rather than start a potential edit war with Freepsbane over this section, I'd like to build actual consensus on the terminology used here. Given that the Iraq War proper lasted until 2011, the current sectarian violence raised by ISIS is a separate conflict in its own right. Numerous sources point to that direct phrasing, and the inability of the Iraqi government to quell the spillover from the Syrian Civil War, while a result of the Coalition Withdrawal, does not change the historical nature of the original coalition War in Iraq. ♥ Solarra ♥ ♪ 話 ♪ ߷ ♀ 投稿 ♀ 20:53, 11 July 2014 (UTC)
- A bit of history, I originally reverted an act of vandalism yesterday which restored the terms 'Coalition Victory.' I don't see any history of any consensus reached on the exact phrasing used here. ♥ Solarra ♥ ♪ 話 ♪ ߷ ♀ 投稿 ♀ 21:06, 11 July 2014 (UTC)
- @Solarra: If you bothered checking Archive 29, this has already been discussed. It's best to leave it how it was on July 8, before User:Revihist changed it without consensus. This is just going around in circles at this point. Supersaiyen312 (talk) 21:11, 11 July 2014 (UTC)
- @Supersaiyen312: I concur, it is best to leave it as the list, I'll restore that. I agree that the list documenting the phases of the war is far more valuable than a blanket statement either way. ♥ Solarra ♥ ♪ 話 ♪ ߷ ♀ 投稿 ♀ 21:17, 11 July 2014 (UTC)
- It's more than a little disingenuous to claim removing an unsourced claim regarding victory constitutes vandalism. As mentioned, we had a lengthy discussion about whether that label was appropriate, and the overwhelming consensus was that it wasn't up to encyclopedic standards. Repeatedly reintroducing a clause against editor and source consensus falls much closer to breaking undue weight guidelines. Plenty of sources, including the Army[1], said Sunni militants like the ones led by al-Douri were spreading and stepping up attacks against the government prior to the withdrawal. The only sources claiming victory are a few partisan outlets who by national and global standards are a tiny minority.Freepsbane (talk) 00:30, 12 July 2014 (UTC)
Repeated insertion of content after being reverted and against consensus and without edit summaries is unacceptable edit warring. - - MrBill3 (talk) 05:33, 14 July 2014 (UTC)
- Given that they've now likely escalated to socks, and that an older likely sock vowed to wipe out the other editors' "lies", I'm not very optimistic about Revihist ever talking this out. Freepsbane (talk) 15:28, 14 July 2014 (UTC)