Talk:Gigabyte/Archive 1


Centralization

Discussion about centralization took place at Talk:Binary prefix.

Must have been forever ago. Jeh (talk) 20:19, 3 August 2011 (UTC)

Both definitions of Gigabyte are correct

Okay, before I get completely pissed off and start an edit war...

Gigabyte was defined as 2^30 bytes before the standards bodies started mucking around with its definition in the 21st century, and that's largely due to disk manufacturers using a decimal rather than a binary scale for defining storage, which has resulted in numerous lawsuits. In all of those lawsuits, the courts found the drive manufacturers to have misled customers.

All publications relating to computer programming or engineering have used the binary definition of kilobyte, megabyte, and gigabyte, since digital computers are base-2 machines, not base-10, and memory address space always has to land on a power of two, while in the decimal system everything lands on a power of 10.

When dealing with data transmission over communication lines, the term bps is used, meaning bits per second, not bytes. So in this case it makes more sense to use a decimal definition than a binary one. There's a reason why Internet RFCs use "octet" to mean a collection of 8 bits rather than referring to the byte.

Saying that SI is the only definition of gigabyte is wrong. Sure, some standards bodies may have chosen to recommend that gigabyte mean 1 billion bytes, but gigabyte as 2^30 bytes has been in use for far longer. Ignoring this fact completely is like standards bodies deciding to redefine the mole to mean 10.0×10^23 atoms because it makes people feel better than the more inconvenient 6.0221415×10^23 atoms, and then completely ignoring the older, more common definition of the mole, which, undoubtedly, scientists would continue to use, however inconvenient and confusing that may be.

It would have been far more intelligent of the standards committees to create two new names to avoid confusion, instead of redefining terms already in wide use (XXXXbyte) and then creating another term (barf-bibyte). They should have created a second term for the decimal storage definition, something like "gigdebyte"... but what can you expect from bureaucrats and groupthink.

And, the most important point: the SI system is a suggestion. Nobody is bound by law to use it. If we choose to ignore its definition of a gigabyte where it contradicts an earlier definition, that is our choice. Because there is a choice to use SI or not (not to mention the fact that the SI redefinition is extremely new), both the binary and the decimal (aka SI) definitions of the gigabyte are equally valid. It is therefore wrong to say simply that "XXXbyte is an SI measure", because more often than not it is not used that way, especially when purchasing flash memory storage devices or RAM, and when viewing your hard disk and RAM capacity in your operating system.

Both definitions of the gigabyte need to be given equal weight; however, it must also be made clear that the inclusion of gigabyte in the SI system to mean 1 billion bytes is very new and is not as commonly used.
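
Numerically, the gap being argued over is easy to state. A rough Python sketch (illustrative only, not from any standard or post above):

    # Illustrative sketch: the two competing "gigabyte" conventions.
    DECIMAL_GB = 10**9   # SI / drive-label usage: 1 GB = 1,000,000,000 bytes
    BINARY_GB = 2**30    # traditional binary usage: 1 GB = 1,073,741,824 bytes

    diff = BINARY_GB - DECIMAL_GB
    print(f"binary GB exceeds decimal GB by {diff:,} bytes ({diff / DECIMAL_GB:.2%})")
    # -> binary GB exceeds decimal GB by 73,741,824 bytes (7.37%)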

Excuse me while I go correct the article again to reflect this. Dbmoyes Sep 28 04:57:38 UTC 2009

This has been hashed over many times, pro and con. Virtually all articles connected to the issue have some kind of discussion of this. But after all is said, the formal definition is the one accepted by the standards bodies, which are not arbitrary decision makers but consist of qualified engineers and scientists, respected enough by their peers to be members of those bodies. Kbrose (talk) 15:20, 28 September 2009 (UTC)
JEDEC is a standards body, no? They chose to use the binary definition. The fact remains that while the SI system may choose to define a gigabyte in one way, gigabyte also holds another, equally valid definition that had been in use far longer than SI's. Ignoring this fact, especially given that users will more likely encounter the binary definition of a gigabyte than SI's in everyday use (like when determining the size of a file on their hard drive), is irresponsible. As such, this needs to be mentioned in the first paragraph, not tucked away at the end as an "oh well, some people choose to use gigabyte in this way, and it's wrong cuz IEC said so, and we're going to ignore all preceding definitions of gigabyte altogether, burn the books and software that used the old definition of gigabyte and pretend that never happened." Dbmoyes Sep 28 15:42:19 UTC 2009
Nope, just archive it and say it is not correct by today's definitions. Update: JEDEC is adding notes about the IEC prefixes in their new standards documents.
(This post is written in February 2010.) And Dbmoyes, they haven't invented the IEC prefixes to do something bureaucratic but to improve things: to SOLVE the confusion between decimal and binary use in information storage. I can see that you have never been confronted with science much. If you're trying to do difficult things and have to calculate storage, you want a clear definition. Most educated people are fine with the new prefixes.
--Thelennonorth (talk) 15:09, 15 February 2010 (UTC)
Dbmoyes, you are correct that JEDEC is a standards body. Thelennonorth, it is incorrect to say that JEDEC "just archives it", and that claim completely misrepresents the actual standards for gigabyte as 2^30 published by JEDEC. Kbrose also misrepresents the situation, because the standard for inclusion in Wikipedia is reliable sources; since hardly any reliable sources use KiB, MiB and GiB, WP:MOSNUM says KiB etc. should not be used for most articles. Also, since the majority of reliable sources do use binary powers of two, that is why Wikipedia uses kilobyte with powers of two. In this case Wikipedia specifically does not follow what the IEC "standards body" says, because the majority of the real world does not follow what the IEC says on the subject. Dbmoyes, you might be interested to note that Thelennonorth has tried to make certain unsupported claims on Talk:Binary prefix, which resulted in being asked to provide reliable sources, to which no supporting reliable sources have been posted. Also note that on the same page Kbrose tried to misrepresent the claims in the article, which has now resulted in those claims being removed and a newer article being planned to correct the WP:NPOV violations Kbrose tried to add to the article in the first place. To Thelennonorth and Kbrose specifically: Wikipedia articles are meant to reflect the real world and not the world as you want it to be. Since you both have not provided credible reliable sources for your claims, they are not to be included in articles. Also, article talk pages are not meant to be used to forum-shop issues that have already been settled on other related talk pages. Dbmoyes, if you see this issue being forum-shopped in the future, it might be a good idea to post an alert at WP:MOSNUM, because the consensus is clear: IEC prefixes are not commonly used and therefore not to be used for the vast majority of articles. Thelennonorth, you should read WP:Forum shopping. It describes how asking the same question until you get an answer you like is frowned upon. Resurrecting an old talk page thread to try to give a false impression of consensus is also frowned upon. Fnagaton 07:01, 16 February 2010 (UTC)
Quoting Thelennonorth: “Most educated people are fine with the new prefixes.” No, that’s not true. Perhaps “most educated intelligent Wikipedians who frequent this discussion page and debate this until the heat death of the universe are ‘fine’ with the IEC prefixes.” That is likely a true statement. The trouble is that even though the IEC's proposal is over ten years old, terminology like “kibibyte” wasn't even in the latest edition of the Microsoft Dictionary of Computer Terms at my library when I last looked. It isn’t used by any computer manufacturer on this pale blue dot when communicating to their customer base or any general-interest audience. Moreover, none of the big-circulation computer magazines directed at a general-interest audience uses the IEC prefixes. The terminology is thus generally unfamiliar to the readership that comes to Wikipedia’s computer-related articles. That’s why we don’t use them in our articles. And we certainly don't try to slant this article about what “gigabyte” really means in the real world; it is an ambiguous term, and that ambiguity rarely causes problems in the real world, except for the incessant ranting here on this page by futurist technocrats who deeply believe that they can Change the World to Make It A Better Place. Just drop it please. The real world has soundly ignored the IEC’s proposal on binary prefixes and it’s going nowhere. Greg L (talk) 15:27, 19 February 2010 (UTC)
The misrepresentation of facts. By educated people I mean the students and professors at the academy where I study. Saying they are Wikipedians is definitely a lie; I'm the only Wikipedian among them. They don't mind; some of them do find it an improvement and most just don't care. About the world not using those new terms: just because they have been invented doesn't mean the whole world suddenly changes to them overnight. This happens slowly and gradually. Their success at this stage lies in how steadily adoption keeps going. Mind you, beware of drawing premature conclusions.
As of this time (June 2010) Apple, in Snow Leopard (Mac OS X 10.6), uses the right things (KB = 1000 B, MB = 1000000 B, and so on). Apple also is a computer manufacturer; they also deal in hardware, so that makes the "not used by any computer manufacturer" claim from Greg L (talk) a lie. Ubuntu is discussing their units policy: http://www.neowin.net/news/ubuntu-implements-units-policy-will-switch-to-base-10-units-in-future-release; it will probably be done within a year. Other Linux distributions are also going this route. The Linux kernel has done the correct thing since 2001. The world of which critics speak is getting smaller and smaller while the world of which proponents like me speak is, for some reason, getting bigger.
The real world has actually soundly ignored your claim that those prefixes are powers of two. Most people think they are decimal. Don't come off with those lame stories about IT people who have been trained to read powers of two into these prefixes. Talk about real people, e.g. the average Joe Six-pack who uses his computer maybe once a month, or other average people who do email, some browsing, chatting, watching movies, listening to music and looking at their pictures. They have never cared about the prefixes and just think they are powers of ten. First show me some evidence that this is false, or STFU!!
It also seems that Greg L (talk) ignores logic and other forms of reasonable discussion, which makes him look very biased. And I'm sure he is.
Hopefully the criticasters will read this instead of rambling about their shrinking world and producing all kinds of lies.
Two of them have been proven wrong in my answer here.
Thelennonorth (talk) 13:21, 17 June 2010 (UTC)
Also, I see that some of my statements have been misunderstood. By "just archive it" I meant that we shouldn't burn the old things and history, like the Nazis did with writings they didn't like; I meant to archive that history and refer to it as history. JEDEC does publish GB = 1024^3 B and other such usage as an official standard. My apologies for the confusion. The discussions, everything, should be present in the Wikipedia article, including the TWO meanings and what the industry does with them. By all means, put it all in, because Wikipedia is supposed to be an encyclopaedia about everything, as much as practically possible. Thelennonorth (talk) 12:35, 25 June 2010 (UTC)

The problem is that the standards body for the U.S. computer industry is not the IEEE but ACM. The Association for Computing Machinery, whose membership is made up of those who have computer science degrees, not engineering degrees, has never recognized anything other than powers of 2 of the form 2^(10n), where n is an integer, as the accepted values for these terms. Those who teach computer science are (eligible to be, if not already) ACM members and couldn't care less what the IEEE says. A "gigabyte" has never been one billion (U.S., i.e. one thousand million) bytes but always 1,073,741,824 bytes (2^30). From the 1950s through 1999, that's the way it was, without question.

The argument of capacity for storage devices, especially magnetic and optical, is not compelling as the maximum size is usually given as the unformatted count of bytes, not the count usable for data storage. The difference is the bytes dedicated to synchronization, error detection, and sector identification (the latter moved into the medium after the "hard-sectored" floppies of the 1970s lost the media hardware war). In contrast, electronic (silicon) computer memory capacity is always in powers of 2. 71.106.211.51 (talk) 00:04, 8 November 2011 (UTC)

"the standards body for the U.S. computer industry is not the IEEE but ACM." Untrue. The ACM is not a "standards body" at all. Please take a look at the ACM's web page regarding its Technical Standards Committee. Even back then, ACM itself never published any standards themselves; they instead worked with other standards bodies.
Just btw, even though it's called "The Association for Computing Machinery", many more of the ACM's activities, conferences, SIGs, and publications are about software than hardware.
"The argument of capacity for storage devices, especially magnetic and optical, is not compelling as the maximum size is usually given as the unformatted count of bytes, not the count usable for data storage. The difference is the bytes dedicated to synchronization, error detection, and sector identification..." I'm afraid you are misinformed on that point also. That is a common misconception, often erroneously offered to explain the difference between a drive maker's "500GB" claim and an OS's report of "465 GB".
I have in this particular laptop a hard drive marketed as "500GB". Windows says the drive (not the partition) has a capacity of 476,939 MB. 476,939 × 1024^2 = 500,106,788,864 bytes, or 976,771,072 sectors of 512 bytes each. If I accessed this as a "raw" disk, that's how much I could store on it, assuming that the 476,939 MB figure does not include a rounding error. The usable capacity for files under Windows is very slightly less due to partitioning and file system overhead, but none of that difference is due to sector preambles or the like. Thus the figure quoted by the drive maker is absolutely not including things like sync and error detection overhead bits (if it were, it would be considerably larger than 500GB), as Windows is completely unaware of those.
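
The arithmetic is easy to check. A rough Python sketch using the figures quoted above (values from this post, nothing else assumed):

    # Checking the capacity figures quoted above.
    mb_reported = 476_939                # binary MB reported by Windows
    bytes_total = mb_reported * 1024**2  # the "500GB" label, in bytes
    sectors = bytes_total // 512         # 512-byte sectors
    print(f"{bytes_total:,} bytes = {sectors:,} sectors")
    # -> 500,106,788,864 bytes = 976,771,072 sectors
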
This is not an isolated example; in my almost 40 years of experience in the industry I have never encountered a hard drive whose capacity marking includes anything but user-accessible storage. Even back in the day when we did a true "low-level format" on our hard drives, drives were marketed with capacities that reflected the end-user capacity after such formatting with the drive maker's recommended parameters.
The same is true of (and has always been true of) optical media; the numbers just don't make sense otherwise - if those figures included e.g. CRC data they would be considerably larger. Jeh (talk) 15:01, 8 November 2011 (UTC)
"A "gigabyte" has never been one billion (U.S. i.e. one thousand million) bytes but always 1,073,741,824 bytes (2^30). From the 1950's through 1999, that's the way it has been without question." Also incorrect. The hard drive industry has, with very very very few exceptions, nearly always used megabyte to mean 1,000,000 bytes, gigabyte to mean 1,000,000,000 bytes, and now terabyte to mean 1x1012 bytes. And no, they are not counting e.g. ECC bits. Please note also that hard drives got to capacities quoted in gigabytes long before RAM did. Jeh (talk) 16:47, 8 November 2011 (UTC)
"In contrast, electronic (silicon) computer memory capacity is always in powers of 2." Not all silicon memory. I have here an "8 GB" (manufacturer's label) USB flash drive. Its actual capacity is 7,998,537,728 bytes. Not 8x10243, which would be 8,589,934,592 bytes. I invite you to check this out with any USB storage keys, SD cards, etc., you have handy. You will find that the "GB" on the manufacturer's label means 1,000,000,000, not 10243. What you state is true only of internal RAM, not all "electronic (silicon) computer memory". Jeh (talk) 16:47, 8 November 2011 (UTC)

lead sentence isn't very good

Here is the lead for megabyte:

"A megabyte (derived from the SI prefix mega-) is a unit of information or computer storage equal to one million bytes."

and here is the gigabyte:

"A gigabyte (symbol GB) is a unit of measurement of data, usually used in the context of computers"

To the casual reader or researcher: they probably know that a GB is a unit of measurement in the context of computers, but they would want to know what the unit stands for. It would be nice for all the byte-measurement articles' lead sentences to have a similar, more useful structure. The megabyte sentence does a lot more than the gigabyte sentence.

You can always fix it yourself. I will change it. - Omegatron 13:36, July 20, 2005 (UTC)

For every other use, it means exactly 1000³ bytes. In order to address this confusion, currently all relevant standards bodies promote the use of the term "gibibyte" for the binary definition.

This statement is incorrect. CD-R capacity has always been given in base 2. DVD-Rs, unfortunately, don't use the more honest method (when dealing with computer storage). —The preceding unsigned comment was added by 75.65.225.161 (talk) 19:20:05, August 19, 2007 (UTC)

It's not incorrect, actually. It says "currently all relevant standards bodies", and CD-R manufacturers are not standards bodies. [Emboldening mine.] SarahTehCat (talk) 22:47, 28 April 2015 (UTC)

Sentence is rather uncorrect

The article states:

Most software applications have no particular need to use or report memory in binary multiples and operating systems often use varying granularity when allocating it.

This is a strange sentence, because computers, when allocating memory, don't use varying granularity. Instead, computers and computer programs pass the whole number, without a prefix, around internally after converting the input to a number. This is how computers actually work: internally they work without prefixes, only with whole numbers. The prefixes are only used (appended to a string of characters in an output box) when reporting to the user. The sentence is misleading because people could think computers do not allocate memory in an exact manner.

Shouldn't this be that OS's and programs often use varying granularity when reporting it? Thelennonorth (talk) 17:47, 8 October 2009 (UTC)

No. The sentence is correct. When ALLOCATING the memory, differing granularities can be used. For example, it could be allocated in MBs, or in KBs, or in multiples of 47... depending on the situation. Any sufficiently skilled programmer will tell you this. Oh, and you mean "incorrect", not "uncorrect". Ironically.
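
To illustrate what differing allocation granularity means in practice, here is a rough Python sketch; the request size and granularity values are arbitrary examples, not drawn from any particular operating system:

    # Rough sketch: an allocator rounds each request up to its granularity.
    def round_up(nbytes: int, granularity: int) -> int:
        """Smallest multiple of granularity that is >= nbytes."""
        return -(-nbytes // granularity) * granularity

    request = 1_000_000                   # the caller asks for about 1 MB
    for gran in (4096, 65536, 1024**2):   # example granularities: page, chunk, MB
        print(gran, "->", round_up(request, gran))
    # 4096 -> 1003520; 65536 -> 1048576; 1048576 -> 1048576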

Pronunciation (jiga vs giga)

The change by DaleNixon on Oct 13, 2003 is questionable. I have lived in the US for 20 years and I have never heard anyone pronounce giga as jiga except in the movie "Back to the Future". Even the director's commentary on the movie's DVD admitted that the actors mispronounced "jigawatts" because they were clueless about what giga was back then. Dale should clarify in which language jiga is used. Kowloonese

My change is in fact correct. Please look up "gigabyte" at www.merriamwebster.com. You will note that both pronunciations are acceptable. I first encountered this in a databases class as a senior in college in computer science. My textbook pointed out the oft-ignored pronunciation of Giga as Jiga. DaleNixon 21:24, 15 Oct 2003 (UTC)
I agree with Kowloonese and have reverted Dale's change. Angela 07:32, 20 Nov 2003 -(UTC)
It's nice to see some people agree, but I didn't realize truth was a democracy. Did anyone do any research on this, or is "I've never heard it like that, therefore it is not so" considered appropriate authority? Pronouncing it "jigabyte" or "gigabyte" is correct either way, even in American English. Brief research will turn up a soft 'g' pronunciation for the SI prefix "giga" dating back a very long time.

Get an older dictionary: you'll find it pronounced as "jiga" as opposed to "giga".

The current versions of all major dictionaries (OED [1], Webster [2], and American Heritage) list "jiga" as the correct pronunciation. The Greek root is "giant."


Ignorance is no excuse for mispronunciation to be accepted. In every field of study the prefix "giga-" is pronounced "jiga". With the rise in popularity of computers in the late 20th and early 21st centuries, words like megabyte and gigabyte became popular, and with no proper instruction Americans, with their flawed English skills, decided to pronounce giga the way it looked. If the same methodology were applied to "grand prix", no one would accept "grand pricks" as an acceptable pronunciation.
If you have no idea where this "jiga" pronunciation came from, just look at the word "gigantic"... which of course is pronounced "jigantic". In the same vein, giga- comes from the Greek γίγας, meaning 'giant', and of course the definition of gigantic is:
"so exceedingly large or extensive as to suggest a giant or mammoth."
And although people dispute the pronunciation of giga vs jiga, I've never heard anyone pronounce gigantic with a hard g (as in giggle).--Huper Phuff talk 00:13, 26 August 2008 (UTC)
That's a biased opinion and not a neutral point of view, because I pronounce gigantic with a hard g.
I make this mistake because it's spelled with a g, not a j. Having learned it's a soft g, I will try to improve my skills.
Besides, doesn't the speaker need to pronounce giga with a hard g to distinguish it from jiga?
Thelennonorth (talk) 13:56, 11 September 2009 (UTC)

decimal prefixes wrong?

I believe that a gigabyte is 1024 megabytes, not 1000. I am a computer engineer and we never use 1000, always 1024. This is further illustrated when you ask the Google calculator "1 gigabyte in megabytes" and the result comes back as 1024. This is clearly an error; I shall fix it. 19:26, 4 Oct 2004 (UTC)
I am afraid this is incorrect. Officially 1 gigabyte equals 10^9 bytes, not 2^30 bytes. The problem is that some hardware and software manufacturers do not comply with this international standard. ("These SI prefixes refer strictly to powers of 10. They should not be used to indicate powers of 2 (for example, one kilobit represents 1000 bits and not 1024 bits)."[3]) Donar Reiskoffer 06:42, 5 Oct 2004 (UTC)
SI has nothing to do with this. While SI has a prefix "giga", it is NOT used in relation to computers. Computers (and bits, by their very nature) are binary. Applying a powers-of-10 rule to them simply doesn't make sense. As noted above, Google agrees with me. The page linked says nothing to the contrary, other than that it is using a decimal system. A kilobit is 1024 bits. Sorry to have to break the news to ya; there's no way around that fact. Cavebear42 02:39, 8 Apr 2005 (UTC)
Google isn't an authority on measurement; if you're trying for argumentum ad populum, Microsoft would be a better choice. The fact of the matter is that the SI prefixes *are* used in computing: a 1 GHz processor completes 10^9 cycles/second, the Fast Ethernet standard specifies a data rate of 10^8 bits/second, and so on. Binary multiples are only used consistently to measure memory capacity, and there are physical reasons for that. Hard drives are logically divided into sectors of 2^9 × 2^3 bits for historical reasons, but at higher levels their attributes usually aren't round numbers in either decimal or binary; likewise other magnetic and optical media. 05:43, 8 Apr 2005 (UTC)
Frequency is clearly not the same, in that it hardly relates to computing; it is a measure of time and therefore has nothing to do with binary systems. That is a weak rebuttal. A byte has 8 bits. A kilobyte is 1024 bytes (and always has been), a megabyte is 1024 kilobytes (and always has been), a gigabyte... I'm sure you can see where this is going. The ignorance of the general population, and the desire of hard drive manufacturers to write a higher number on their drives, do not change the correctness of the binary system being used to measure binary data. If you have a link to the Fast Ethernet standard saying otherwise, please post it. I have a degree in computer engineering and, having spent several years of my life viewing standards and doing binary math on a day-to-day basis, I have to disagree. Also, play with Google sometime; you might learn that it has a measurements database to back up many things. Here are some links for you:
mile in feet, cord in bushels, MHz in Hz, GB in bytes. Cavebear42 06:05, 10 Apr 2005 (UTC)
See binary prefix and Hard drive#"Marketing" capacity versus true capacity.
Data rates, hard drive measurements, and the like are measured in decimal prefixes, and memory, CDs, and the like are measured in binary prefixes. - Omegatron 20:25, May 22, 2005 (UTC)
I am aware of the errors which people pushing the GiB notation (which has not been accepted in widespread use) have put all over Wikipedia. The article on binary prefix is well written for the most part and uses the unpopular titles for the sake of clarity in discussion. There are some claims in there which should be verified, such as the ones you have made in the previous comment. Please feel free to come forward with proof that those are the accepted uses in those fields (perhaps from IEEE or such) and we can go about citing sources. I have not changed them to the correct uses because I have not done the same. The abstract (such as this article) is easy enough to back up, and that is why I edit it. To state what constitutes common use would take more research (which I don't currently have time to do). Cavebear42 17:20, 23 May 2005 (UTC)
Cavebear42 is absolutely correct. The term "gigabyte" is 2^30 bytes. Your computer will show you this also. Wikipedians who are trying to get the world to adopt "gibibyte" for this quantity are grasping at straws, in my opinion. Many articles in Wikipedia are now misleading because of this, and this only leads to further confusion in the world. You will NEVER get the computer engineers who design all of the hardware and software in the world (I'm one) to adopt the idea that a gigabyte is 1 billion bytes. It doesn't work with digital hardware, which uses binary addressing and counting. Since the term gigabyte refers to digital hardware and software, it would be preferable for average people to learn the real value, rather than trying to re-educate the engineers who build these systems. Tvaughan1 01:15, 29 December 2006 (UTC)
The people who claim a gigabyte equals 10^9 bytes are incorrect. In the binary system bytes are counted in powers of 2 because of the on/off nature of a bit (0 for off, 1 for on). And then of course 8 bits are a byte (still a power of 2: 2^3 = 8). Hence 2^8 is 256, 2^9 is 512, and 2^10 is 1024. We've all seen 256 MB RAM, etc. In reality the term "gigabyte" doesn't properly reflect the amount it's describing, which isn't our problem; it's how it was chosen to be defined. Since it's easier to say "gigabyte" rather than "gigabyte plus 24 megabytes", the world embraces the fact that gigabyte is a fair representation of 1024 megabytes. Since 1024 isn't far off from 1000, there is no real issue with calling 1024 megabytes a gigabyte. People approaching the concept of a gigabyte from a "definitional" perspective are correct in ASSUMING a gigabyte is 1000 megabytes, but people who understand the logic on which computers are based know that such an idea is absurd.--Huper Phuff talk 00:30, 26 August 2008 (UTC)
That's a very good idea. I'll start collecting references. - Omegatron 17:56, May 23, 2005 (UTC)
I've moved the references over to Talk:Binary prefix#decimal prefixes wrong?. - Omegatron 01:14, May 28, 2005 (UTC)
I'm no Wikipedia editor, but I am a computer science grad. I have never seen, other than on boxes where companies are trying to make their MP3 players (etc.) look bigger, the use of decimal notation for storage in this field. Also, I find the current article to be highly POV toward decimal notation. It seems to me that it states that binary is used, but that decimal is somehow "more correct", and it really comes across as inaccurate to me. Maybe while references are being collected and this debate is being played out, it can be revised to look more neutral, with a little blurb on how there's a controversy over whether giga should mean a power of 10 or a power of 2. —Preceding unsigned comment added by 74.46.63.90 (talk) 00:23, 12 June 2009 (UTC)


A vote has been started on whether Wikipedia should use these prefixes all the time, only in highly technical contexts, or never. - Omegatron 14:49, July 12, 2005 (UTC)

Canadian usage

You said that a thousand million is a billion in American usage. Is there a Canadian usage that differs from it? If so, what is a billion in Canadian usage? User 66.32.68.243

British English used to use billion to mean a million million, but thousand million is the usual meaning nowadays; see Billion. 82.36.26.32 04:03, 2 Dec 2004 (UTC)

There are other countries in the world apart from the USA and Canada, you know ;-)

The Canadian usage is currently the same as the American usage. --Zippanova 18:59, 8 May 2005 (UTC)

1 GB = 1000 MiB?

I've had a webhosting provider who advertised 7 GB of space that turned out to be a quota of 7000 MB (by the binary definition). That is, they define 1 GB as 1,048,576,000 bytes. Has anyone seen this usage anywhere else? Is it, by any chance, notable? Graue 18:38, 22 May 2005 (UTC)

Hard drive capacity is usually defined as 1GB = 1000MB. This makes sense, as using 1GB=1024MB would lower the _apparent_ capacity. Jason 18:18, 10 November 2006 (UTC)

I've also seen HDD capacities quoted using a definition of 1,024,000,000 bytes. There is little or no standardisation in what terms are used by these people. JulesH 10:43, 13 December 2006 (UTC)
7GB defined as 7000MB is based on a "term definition", not a "binary definition". 7GB as 7168MB is a binary definition, since the binary methodology says 1024 MB is a GB. But the prefix giga refers to 1000 megas. --Huper Phuff talk 00:39, 26 August 2008 (UTC)

Well, it's certainly happened before: the "1.44 MB" floppies are actually 1440 KiB in size. Since the IEC abbreviations hadn't been invented yet, this could only be written as 1440 KB, and was then carelessly abbreviated to 1.44 MB, even though it matches neither the binary nor the base-10 definition of MB! --SamB (talk) 17:10, 2 February 2011 (UTC)
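
The mismatch is easy to verify; a short Python sketch using the 1440 KiB figure above (only that figure is from the post):

    # The "1.44 MB" floppy (1440 * 1024 bytes) under three readings of "MB".
    capacity = 1440 * 1024             # 1,474,560 bytes
    print(capacity / 10**6)            # 1.47456 -> not 1.44 decimal MB
    print(capacity / 2**20)            # 1.40625 -> not 1.44 binary MB
    print(capacity / (1000 * 1024))    # 1.44    -> only the mixed "MB" fits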

GB Abbreviation

As a result of this confusion, the unadorned term gigabyte is useful only where just one digit of precision is required.

Well, how about when DVDs are quoted as 4.7 GB for a single layer, or 8.5 GB for a DL? I would like this to be clarified; the quote was in the first section of the article, on "GB". Kb6110 15:48, 5 June 2005 (UTC)

Please see binary prefix and its talk page for all the info you are looking for on such issues. The short answer is that it is the base-10 usage of GB, set by the number of sectors included on the disc. I don't see exactly how you would like this edited, but I'm sure you can do so using that information. Cavebear42 18:48, 6 Jun 2005 (UTC)

Self reference

The current size of raw data for the English-language Wikipedia is about 2 gigabytes uncompressed.

What is the specific argument for excluding this self-reference? The usage is not tangential to "gigabyte in use". In fact, it's the only reference comparing what a gigabyte actually is; the closest alternative is just a simple application of the definition to hard drives. Cburnett July 3, 2005 21:01 (UTC)
I don't understand the question. - Omegatron July 3, 2005 23:12 (UTC)
See the page history for context. Cburnett July 4, 2005 02:26 (UTC)
At the time, I was thinking that readers of the article may not be viewing it on Wikipedia, and may not understand what it is. However, there is a link to the term Wikipedia, so I will withdraw that objection. But what reference was used to come to that figure of 2 gigabytes, and when was it added to the article? I thought the amount of raw text in the English Wikipedia would be much more than 2 gigabytes by now. Graham 03:32, 13 July 2005 (UTC)
I believe that this is inappropriate self-reference, which has no direct bearing on the subject. It is wikicentric, something we are supposed to avoid. Why not give the size of the online Encyclopædia Britannica, or the size limit of a Hotmail account, or anything else? Why specifically use Wikipedia as the example? It's just not very becoming, in my opinion. Themindset 18:29, 16 August 2005 (UTC)

Here are some alternatives: How Much Data Is That? - Omegatron 18:39, August 16, 2005 (UTC)

Legal definitions and disputes

I would like to suggest the addition of a section titled "Legal definitions and disputes", or perhaps "Legal claims". There are several lawsuits, settled and pending, regarding the definition of gigabyte. Apple, HP, Dell, and other large companies have been sued in the last couple of years, as have major manufacturers of flash memory. I can provide some names and dates if anyone thinks such information is meaningful in an encyclopedia.

That would be great to know. Perhaps it's more appropriate in binary prefix? - Omegatron 03:46, July 16, 2005 (UTC)
I agree with the original comment; the Seagate lawsuit and settlement (Google 'seagate lawsuit gigabyte') should also be mentioned.
There should be a section on this Gigabyte page which mentions the historic background as well as the legal disputes. It should be mentioned here that historically everybody used the 1024-based meaning before the mid-1980s for all computer-related storage measurements. When the first gigabyte hard drives came on the market in the mid-80s, a few hard drive vendors started to use the 1000-based interpretation, which started to confuse the customers. This came from the marketing departments. Competition to join the "gigabyte league" of hard drive manufacturers was fierce at that time, and perhaps some companies wanted to gain a competitive edge by having capacity numbers which "looked equal" to the competition, where in fact the capacity was smaller. Many computer professionals saw this as "false advertisement". The lawsuits and settlements which followed much later (e.g. the 2007 Seagate settlement) seem to indicate that this was in fact false advertisement. —Preceding unsigned comment added by 75.51.145.144 (talk) 14:47, 15 August 2008 (UTC)
I disagree that "historically everybody used the 1024-based meaning", and with the speculative history, but I agree that the lawsuits are of historical interest. They were recently deleted from the Binary prefix article; feel free to restore them when you have the time. shreevatsa (talk) 00:23, 16 August 2008 (UTC)
I'm old enough to have witnessed that "historically everybody used the 1024-based meaning". I think the only people who would dispute this would be the ones who work for a hard drive vendor. —Preceding unsigned comment added by 75.51.145.144 (talk) 18:53, 23 August 2008 (UTC)
I'm not sure the "everybody used the 1024-based meaning" period overlapped with storage being measured in gigabytes... maybe that's the issue? --SamB (talk) 17:13, 2 February 2011 (UTC)
Not sure what is meant by "historically". In the history I lived through, hard drives got to the gigabyte threshold long before RAM did (on anything close to affordable machines, at least). Tape capacity (DDS at 1.3 decimal GB, Exabyte at 2.2) was there too at about the same time, again long before we had any GB-sized RAM to talk about. And that was in the days when DOS just spat out big long numbers for available capacity: no MB, GB, or anything else. Similarly, the first hard drives were quoted in decimal megabytes (if not just in big long numbers) long before there were MB-sized RAM configurations. So at each threshold except KB, the decimal meaning was very much used first. Jeh (talk) 18:55, 2 February 2011 (UTC)
When the first Gigabyte hard drives came on the market in the mid 80's, a few hard drive vendors started to use the 1000-based interpretation which started to confuse the customers.
Sorry but this is not correct. Hard drives have always been quoted using decimal prefixes, going all the way back to the IBM 350's "5 million characters." So it is not a question of "a few vendors" or "started to use." See the Binary prefixes and Timeline of binary prefixes articles for, well, the timelines and links to many early products from the PC field. Jeh (talk) 19:00, 2 February 2011 (UTC)
The lawsuits and settlements which followed much later (e.g. 2007 Seagate settlement) seem to indicate that this was in fact false advertisement.
No, only that a judge and maybe a jury were convinced that, just because "dozen" sometimes means 13 to a baker, eggs should come 13 to a "dozen" as well. In any case (heh), the info on the lawsuits is in the Binary prefix article, which has been rewritten extensively since most of the above discussion took place. Jeh (talk) 19:04, 2 February 2011 (UTC)

Self-reference again

I really don't see the point of adding the information about Wikipedia's size to the article. It can get out of date quickly, requiring unnecessary edits to the page. The text alone is probably 7 gigabytes, but all the page histories combined probably take up much more than that. But most of all, I think it looks unprofessional, seems to boast about the size of Wikipedia, and is irrelevant to users of Wikipedia mirrors. Therefore, I will remove it. Graham talk 10:09, 20 May 2006 (UTC)

1024000000?

A moment ago I corrected the statement that Seagate uses 1 GB = 1,024,000,000 B, since Seagate themselves plainly state otherwise. On the other hand, this HP statement about Seagate must have some source or reason. Can anyone find a place where 1,024,000,000 is actually used?

I think that it should be removed entirely. The article states, "There are two or three slightly different definitions of the size of a gigabyte in use." There are in fact only two definitions in use, and one definition that exists in one place and is misattributed to a company that does not even use it. Nobody uses the 1,024,000,000 definition. I will delete it. Add it back if anyone can find proof that it is actually used. --Mugsywwiii 02:39, 11 October 2006 (UTC)
I have in front of me a Seagate disk that was sold as 3.2GB. Its geometry is 6253×16×63 (512-byte sectors), i.e. it contains 3,227,148,288 bytes of storage. It is labelled as 3227MB. Therefore I can conclude that Seagate, at least, weren't using this definition of a GB (or the equivalent definition of an MB) when they manufactured this disk.
I also have a Samsung disk with 4,311,982,080 bytes labelled as 4.3GB, so also not using this definition.
And a Fujitsu disk with 4,325,529,600 bytes labelled as 4.32GB (which is strange; it should really be 4.33GB).
BUT: I have a Maxtor disk with 80,293,248 sectors (41,110,142,976 bytes) labelled as 40GB. This does appear to be using this definition (it's 41.11GB by the SI definition, 40.15GB by the 1,024,000,000 definition, or 38.29GiB). I don't have it with me, but I'm pretty sure my old Western Digital 10GB drive used this definition also.
It's also analogous to the definition of a MB as 1,024,000 bytes (which is used, for instance, in the term "1.44MB floppy disc").
So, I'm pretty sure this definition is in use, but perhaps not by Seagate. JulesH 10:56, 13 December 2006 (UTC)
Also worth noting: for any individual manufacturer, the engineering team may be using one definition, but the marketing team is perfectly free to use any definition that yields a smaller capacity; that can simply be explained away as the customer getting a little extra space free. I strongly suspect that many storage engineers use the 1,024,000,000-byte definition (because in terms of disk sectors, which are 512 bytes, it works out to a whole number), while most marketing people would look at it, realize they couldn't explain *why* they used that to their customers, and just use 1,000,000,000 instead. Does that make sense? JulesH 11:08, 13 December 2006 (UTC)
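
A short Python sketch applying all three candidate definitions to the Maxtor figures above (the sector count is from the post; the rest is illustrative):

    # The Maxtor drive, under the three candidate definitions of "GB".
    actual = 80_293_248 * 512  # 41,110,142,976 bytes
    for name, gb in (("SI 10^9", 10**9),
                     ("hybrid 1024*10^6", 1_024_000_000),
                     ("binary 2^30", 2**30)):
        print(f"{name}: {actual / gb:.3f} GB")
    # SI: 41.110; hybrid: 40.147 (closest to the "40GB" label); binary: 38.287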

Price of storage

Is the price of hard drives really needed here? If we include this, it seems like RAM and optical storage should be included too. It feels out of place and easily out of date. I am removing it; if anyone feels this is wrong, feel free to justify and revert. Timbatron 06:17, 22 October 2006 (UTC)

Also, "Gigabytes in different products" seems like a "Trivia" section, and should probably be removed. It's a bit like if we had "Uses of the centimeter" talking about all the things that can be measured in centimeter Timbatron 06:19, 22 October 2006 (UTC)

Back to the Jiga vs Giga debate

The American Heritage Dictionary, 4th Edition, 2000 seems to feel that "Jiga" is more correct. They also have an entry for gig itself, in reference to gigabytes, as opposed to the short job, military demerit, carriage or boat, or even the fish hooks that I've always thought were jigs but are all apparently hard 'G' gigs.

Anyway, I find the bit on pronunciation rather silly, and until I or someone else gets a chance to clean it up I've added some fact tags. I don't know enough Greek to say how "gigas" is pronounced. Ciotog 07:43, 7 February 2007 (UTC)

According to A Dictionary of Units of Measurement by Russ Rowlett, the Greeks pronounced it with a hard 'g', so I'll modify the article; but of course Greek pronunciation has changed over the millennia. Ciotog 17:56, 23 March 2007 (UTC)

Logarithmic?

The paragraph beginning with

"The difference between SI and binary prefixes is logarithmic"

is confusing. The point of the paragraph is to show that with "bigger" prefixes, the difference between the decimal and binary values is more pronounced; more precisely, the ratio decreases exponentially, not logarithmically.

To see this, notice that the size of the k-th prefixed decimal value is 10^{3k}, while the size of the k-th prefixed binary value is 2^{10k}. The ratio between them is (1000/1024)^k, which is a decreasing exponential function, not a logarithmic one. (Consequently, we can see that as k gets larger, the ratio actually tends to zero; this may be a stronger observation than simply saying that the difference is more pronounced.)
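
A short Python sketch tabulating the ratio for the first few prefixes (illustrative only):

    # Ratio of the k-th decimal prefix to the k-th binary prefix: (1000/1024)^k.
    for k, name in enumerate(("kilo", "mega", "giga", "tera", "peta", "exa"), 1):
        ratio = (1000 / 1024) ** k
        print(f"{name}: {ratio:.4f} ({1 - ratio:.2%} difference)")
    # kilo 2.34%, mega 4.63%, giga 6.87%, tera 9.05%, peta 11.18%, exa 13.26%:
    # the ratio itself decays geometrically toward zero as k grows.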

DoctorJS3 19:50, 7 June 2007 (UTC)

I noticed this too. I'll change it. I Love Pi 02:07, 23 August 2007 (UTC)

I disagree that "the ratio tends to zero" is better than saying the difference is more pronounced - I could see it being misinterpreted as the "difference tends to zero", which is of course the opposite. Ciotog 02:26, 23 August 2007 (UTC)

Gigabytes vs. MP3

If you use MP3s per gigabyte as an example, the false definition giga = 1024^3 is not very sensible, because MP3s are almost always placed on a disk (HDD or flash) and not in RAM, which means the standard definition giga = 1000^3 is more reasonable. 1024^3/(128×1000/8)/3600 = 18.641, rounded to 19 hours. 1024^3/(128×1024/8)/3600 = 18.204, rounded to 18 hours. This makes me assume someone assumed a kbit is 1024 bits. It isn't. That's even clear after reading MP3. --217.87.98.171 04:21, 1 November 2007 (UTC)
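
The figures can be reproduced directly; a rough Python sketch (128 kbit/s throughout, with kbit = 1000 bits except where noted):

    # Reproducing the post's figures for 128 kbit/s MP3 audio.
    RATE = 128_000 / 8                                  # 16,000 bytes/s
    print(f"{2**30 / RATE / 3600:.3f} h")               # 18.641 -> the article's "19 hours"
    print(f"{2**30 / (128 * 1024 / 8) / 3600:.3f} h")   # 18.204 -> if a kbit were 1024 bits
    print(f"{10**9 / RATE / 3600:.3f} h")               # 17.361 -> with the decimal gigabyte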

Doesn't matter. Many computer users are using systems where "GB" or "gigabyte" refers to 1024^3 in all situations, not 1000^3, regardless of whether hard drive space or RAM is being tallied. Since the mention of MP3s is a practical example, not a technical one, the calculation should at the very least reflect the most common practical or colloquial meaning of "gigabyte". However, since there are some situations where the other definition of "gigabyte" (1000^3) is seen, it might be worthwhile to include both examples and indicate clearly which one is which, and in what situations each is likely to be encountered. Your observation that "kbit/s" always refers to 1000 bits per second is correct, though, from what I've been able to glean online.
By the way, "gigabyte" has two definitions, one being 1000^3 bytes and the other being 1024^3 bytes. Neither definition is "false". --DachannienTalkContrib 06:17, 1 November 2007 (UTC)
That's the opinion of some people who cannot or do not want to distinguish between slang and standards. If you had actually read the article, you'd have realized that storage like HDDs is always labeled using the correct meanings of the SI prefixes. So a 1-gigabyte HDD can hold 1000^3 bytes but certainly not 1024^3 bytes. So if you're actually ignoring what the article explains and keep using gigabyte ambiguously, there's pretty much no point in any of that text at all. If the article explains gibibytes and the example wants to use gibibytes, why does it not use the term gibibytes? --217.87.98.171 07:18, 1 November 2007 (UTC)
Dachannien is correct. And once again Sarenne, this is not the place to push your views and agenda on kibi/mebi/gibi usage at Wikipedia. Same old stance, same old tactics, same old attitude, etc. As Fnagaton explained on your other IP-related talk page, these things are set by WP:MOSNUM, which is the standard here. Take it up there. Your arguments, right down to the very wording, have all been heard before, all from Sarenne. You're not saying anything new, or doing anything differently than last time. And there are a lot more people involved than just a "few". As you were also told, "MOSNUM has been changed due to debate and consensus being formed by a large group of editors agreeing what should be done. You are also mistaken, the pro-IEC binary prefix arguments do not hold water and that is reflected in the result of the debate and consensus." To borrow some of your usual sarcasm: any further questions? Then take them to Wikipedia talk:Manual of Style (dates and numbers). --Marty Goldberg 07:41, 1 November 2007 (UTC)
Really, the reason that hard drives are measured in 1000^3 (or, previously, 1000^2) bytes is that it makes the resulting significand bigger (and bigger is pretty much always better when you're trying to sell stuff). So they can use the term "gigabyte" correctly, but once you install Windows and check the hard drive size, you'll find a smaller number there that also uses "gigabyte" correctly. Data rates might be affected by the same motivation, but it doesn't really matter, since measuring bits per second already includes a dimension unhindered by the design requirements that bring in powers of two.
On the other hand, memory is regularly described as being measured in 1024^3-style gigabytes because "gigabyte" could hold the meaning of 1024^3 bytes long, long before anybody invented another term for it. Evangelizing the use of "gibibyte" is no more likely to eliminate this usage of "gigabyte" than in countless other examples of entrenched terminology being virtually impossible to dislodge.
Forcing people to learn the IEC prefixes on Wikipedia, by eliminating the colloquial usage of the original terms in every article where they appear, is similar to forcing people to learn IPA (though perhaps to a lesser magnitude) - it adds confusion due to the unfamiliarity of the system without really adding to the informative value of the articles. If having a 1000^3 version of the example there adds information to the article, that's great - but I simply don't see how removing the colloquially-used 1024^3 example also adds information to the article. --DachannienTalkContrib 10:16, 1 November 2007 (UTC)
If "bigger is pretty much always better when you're trying to sell stuff", then why isn't RAM labeled using standard SI prefixes? RAM and especially cache is much more expensive than HDDs. Do you think an IT expert would be confused by a 268,43 MB RAM module and pick the 256 MB instead? Please, understand that it is a myth and a urban legend that any vendor picked the SI prefixes to confuse customers, especially so because they decided this long before any laymen were able to afford something like a hard disk. It's also interesting to note, that nobody accuses network device vendors of anything albeit "10 MBit/s" uses the SI prefixes, too. So why would there be a HDD conspiracy but not none in any of the other cases? Regarding the examples. People always reject the IEC prefixes based on the claim, that they add confusion (albeit their intention is to reduce confusion), so how useful is it to provide examples that leave the reader confused because they do not even state which meaning of gigabyte is used to calculate the given value? Isn't there a logical conflict in this way to argue? For what it's worth, I made it clear for some examples except for those that I cannot verify. --NotSarenne 02:07, 2 November 2007 (UTC)
Because at the time, the IEC terms had not been invented yet, and memory modules have a technological constraint that causes them to almost always be produced in quantities that are power-of-two multiples of 1024 (or 1024^2, or 1024^3 as time progressed). The failure to adopt the IEC prefixes worldwide may represent a "legacy" status in your opinion, but it also is, simply put, the way things actually are.
I am not denying that the IEC prefixes were invented to reduce ambiguity. I understand that that is their whole point. But the fact that they are not universally adopted means that it is necessary that this Wikipedia article - and other similar articles involving the use of binary prefixes in the context of electronic devices - reflect the state of the world as it is today, and that includes the fact that in at least part of the world, hard drive manufacturers use "gigabyte" to refer to a billion bytes and network device manufacturers use "gigabit" to refer to a billion bits per second, while memory manufacturers use "megabit" and "gigabit" to refer to 1024^2 and 1024^3 bits. --DachannienTalkContrib 06:05, 2 November 2007 (UTC)
I don't think that universal adoption is a requirement for their use. SI units are not universally adopted either, for example in the USA, at least in non-scientific contexts. Nobody denies that these binary prefixes are rather new, so I cannot see how that can serve as an argument against them; arguing this way would make any kind of change in any context impossible. That's really a logical dead end. Also, hard disk vendors do not use these units and prefixes this way in just one part of the world; they have been using them consistently all over the world for several decades and never used any other definition. Also note that the term billion is even more dangerous, because confusion over the term billion can yield values wrong by a factor of 1000. Furthermore, there are several examples where science (and the facts) disagree with public opinion or perception; that is, there are a lot of myths and urban legends. --NotSarenne 14:55, 2 November 2007 (UTC)
On a side note, I think the last few changes you made are improvements to the article. Thanks. --DachannienTalkContrib 06:10, 2 November 2007 (UTC)
Exactly, Dachannien. Wikipedia is descriptive, not prescriptive. To be descriptive means using the terms that are found in the majority of relevant reliable sources for each article. For example, if the sources describe a topic predominantly with British English spelling, then British English spelling is used; i.e., it isn't up to Wikipedia to force people to spell one way or the other. This kind of thinking is reflected in many places in WP:MOS. Fnagaton 12:30, 1 November 2007 (UTC)
English is not standardized; there is no authority for the English language. Please understand that a convention and a standard are not the same thing. The IEEE, however, does publish standards, and the IEC binary prefixes are also part of legal standards in several EU countries. MB vs. MiB is not merely a style issue, otherwise there wouldn't be any lawsuits about it. You cannot even compare this to feet-vs.-meters conversions, because feet and meters (or pick any other outdated/modern units) are nowadays unambiguously defined and used. As the long-winded discussions prove, it's not that simple in this case. --NotSarenne 02:13, 2 November 2007 (UTC)
The fact that they are not universally adopted, even by those who define the standards, means that this Wikipedia article, and other similar articles involving the use of binary prefixes in the context of electronic devices, must reflect the state of the world as it is today. So do not use KiB or kibibytes etc.; use KB and kilobytes etc. QuinellaAlethea 19:09, 3 November 2007 (UTC)
You are ignoring WP:MOSNUM, which currently clearly states that the IEC standard binary prefixes may be used and that the first major contributor of an article may freely choose. Do not tell people that they should not use the terms KiB or kibibyte where appropriate, because they clearly reduce ambiguity and confusion. Also, please refrain from using non-arguments such as "the state of the world as it is today". --NotSarenne 20:09, 3 November 2007 (UTC)
Using KiB/kibibyte units when KB/kilobyte units are used in the article's relevant reliable sources does not reduce ambiguity and confusion. If someone were to create an article using units that are not included in reliable sources, then those edits are in violation of WP:NOR and can be changed. Fnagaton 20:24, 3 November 2007 (UTC)
Sir, you are absolutely wrong. WP:NOR does not prohibit in any way the conversion of units to current standards. Likewise, especially if the original source makes it clear that 1 MB is used to mean one mebibyte, this is not original research at all; it is simply using the information provided by the source. I can tell you that my handbooks explicitly state that "1 MB is used as shorthand for 2^20 bytes in this book". Even if this clarification is not provided by the original source, it can be made clear by any editor who is an expert on the topic, as per WP:OBVIOUS. --NotSarenne 20:59, 3 November 2007 (UTC)
I'm correct, you are wrong. Wikipedia is descriptive, not prescriptive. To use different units from those used in reliable sources is trying to be prescriptive, which is wrong. Fnagaton 21:06, 3 November 2007 (UTC)
No, it is not prescriptive. The IEC standard for binary prefixes was not decided by Wikipedia. Wikipedia is, in fact, prescriptive if it refuses to accept the current international standards. It does not matter one bit whether a majority or a minority uses the IEC standard binary prefixes. In this case, it is even quite obvious that if you're looking at sources older than 1998, you will not find any mention of kibibyte, KiB and so on. --NotSarenne 21:25, 3 November 2007 (UTC)
Again you are wrong, because it is prescriptive to try to change to units that are not being used in the relevant reliable sources. Fnagaton 21:28, 3 November 2007 (UTC)
This is not correct. You might be thinking of quotes: quoted sentences should not be modified. But you are not supposed to copy original sources; therefore, when paraphrasing it is perfectly normal and desirable to convert units to current standards. --NotSarenne 21:38, 3 November 2007 (UTC)
It is correct, and I am not thinking of quotes. It is possible to convert; however, the original units are to be used in the main article, with the conversion as a footnote or in brackets to disambiguate. The conversion units are not used everywhere in the article; they are to be used sparingly, as is the case with disambiguation. Your edits removed the units used in reliable sources and replaced them, which makes your edits against MOSNUM. Fnagaton 21:45, 3 November 2007 (UTC)

Claimed data loss

The current article claims: "In operating systems supporting virtual memory, conflicting definitions can even lead to data loss. For example, if an operating system specifies 1 'gigabyte' (2^30 bytes) of virtual memory, but the storage device has 1 gigabyte (10^9 bytes) available, data loss can occur when an application is swapped to this region of non-existent memory." Please explain how an OS could get the idea that the device has more capacity. The OS gets the number of sectors and the size of a sector from the device. This cannot go wrong, because a computer does not internally calculate with prefixes; prefixes are only used for displaying values nicely to humans. So even if humans disagree about the meaning of giga, the computer does not care, as it calculates with the raw, unscaled values. So the scenario above is either a completely unrealistic made-up example or there is some crucial information missing. --217.87.98.171 04:52, 1 November 2007 (UTC)

I couldn't think of a real way for this to occur, either. The example seems like original research to me anyway. --DachannienTalkContrib 10:18, 1 November 2007 (UTC)
Yes, I agree with User:217.87.98.171 on this; it is a nonsensical scenario. Mahjongg 11:43, 2 November 2007 (UTC)
It's very simple, bugs!
To the anonymous user: computers are machines, and they can malfunction for a variety of reasons.
Seems the following could have happened: the user inputs 1 GB of virtual memory. The program translates this to 1*1024^3 B, because virtual work memory (RAM) is made in powers of two. Then the user reserves 1 GB on the HDD for swapping, but because it's a HDD, the program converts the value to 1*1000^3 B. When the computer swaps the data to the storage device, it loses some of it. I heard about a bug that did this. This is NOT the RIGHT PLACE to discuss this.
Thelennonorth (talk) 14:04, 11 September 2009 (UTC)
Ah, no, that isn't how it works. The claim of "data loss" by an OS "mistakenly" thinking it has 1 GiB of hard drive space to write to when it really only has 1 GB is nonsense. Jeh (talk) 20:57, 25 June 2010 (UTC)

Seagate Offers Refunds on 6.2 Million Hard Drives because of false claim that 1GB = 1 billion bytes

Well, the latest development that proves the MiB, GiB notation is NOT accepted by the general public can be found in a Slashdot article here: [4]. The article states:

"Seagate has agreed to settle a lawsuit that alleges that the company mislead customers by selling them hard disk drives with less capacity than the company advertised. The suit states that Seagate's use of the decimal definition of the storage capacity term "gigabyte" was misleading and inaccurate: whereby 1GB = 1 billion bytes. In actuality, 1GB = 1,073,741,824 bytes — a difference of approximately 7% from Seagate's figures. Seagate is saying it will offer a cash refund or free backup and recovery software."

That should put to rest the seemingly still lingering notion that Wikipedia should adopt the IEC notations KiB, MiB and that consequently the original notation of KB, MB etc. should be converted to reflect decimal sizes such as 1,000, 1,000,000 etc. Mahjongg 11:36, 2 November 2007 (UTC)
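As an aside, the "approximately 7%" figure in the quoted article checks out, and the gap grows with each prefix tier; a quick sketch of the arithmetic (an editor's check, not from the article):

    # Relative shortfall between the binary and decimal reading of each prefix.
    for name, power in [("kilo", 1), ("mega", 2), ("giga", 3), ("tera", 4)]:
        binary, decimal = 1024 ** power, 1000 ** power
        print(f"{name}: {(binary - decimal) / decimal:.1%}")
    # kilo: 2.4%, mega: 4.9%, giga: 7.4%, tera: 10.0%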

The headline at Slashdot is just Seagate Offers Refunds on 6.2 Million Hard Drives; the because of false claim that 1GB = 1 billion bytes part was added by you, which is your personal opinion. It certainly does not appear anywhere in the article, and it's also not the message of the article. Let me explain: in the US legal system, each party has to pay their own costs. That means even if you win a case because you're innocent, you still have to pay your costs. That's why a settlement is a very reasonable move. First of all, it reduces bad publicity by not causing some customer excessive costs. Second, only very few customers are going to make use of the refund offer, but even if all of them do, the costs will likely be much lower than those of a full-blown lawsuit, no matter whether they win or lose in the end. Anyway, you might have noticed that they have not been sued for using KiB, MiB and GiB. I even dare to speculate that if they had actually used the IEC standard prefixes - which should make all Windows/Mac OS users happy - there would be no chance of a lawsuit, because confusion could at best be caused by missing or ignoring the 'i'. Though if you misread some product description or contract, it's clearly your own fault. I also think that, contrary to your conclusion, this is actually an argument for using the IEC prefixes, because KB, MB, GB are quite obviously causing confusion. Note that this confusion would exist even if nobody had ever used MB in the SI prefix sense, because any non-expert or layman is likely to assume that M means mega means 1,000,000, as it is the standard in any other technical field. --NotSarenne 14:08, 2 November 2007 (UTC)
This is indeed an argument for using the IEC prefixes. --Thelennonorth (talk) 18:01, 14 February 2010 (UTC)
  • Good, Thelennonorth, you are free to waltz on into a Best Buy and tell the sales clerk you are looking for a “two gibibyte hard drive”. While you’re at it, you can use the proper “jibi-” pronunciation to be exceedingly correct. Allow me to shoot video while you’re doing this. Who knows, maybe your routine use will help the IEC proposal to catch on; they need all the help they can get.

    In the meantime, let's keep to the facts, shall we, in Wikipedia's articles, and take care not to slant the article to imply that just because the IEC made some proposal over ten years ago, it is now somehow *improper* to use “gigabyte” to mean either 10^9 or 1024^3 bytes. Yeah, I know: “But… but, that sucks and there’s a Better©™® way!” Well… that’s just the way the computer world really works nowadays. The existence of the IEC’s proposal doesn’t change the fact that it is improper for anyone under any circumstance to communicate—in writing or verbally—to an audience in a way that causes unnecessary confusion; particularly when the very terminology one uses is unfamiliar to the intended audience.

    I know: “linking” the term. It does no good to link unfamiliar terminology here on Wikipedia when it is exceedingly unlikely that a reader will encounter it anywhere else after leaving Wikipedia (unless they’re standing in a Best Buy store, listening to you talk to a sales clerk). Slanting this article to imply that it is improper to follow the way the real world works (eschew the IEC prefixes) does our readers a major disservice. When the real world has mostly jumped onto the IEC-prefix bandwagon, Wikipedia can reflect that fact in its articles and write about how the IEC prefixes are not only unambiguous and holy, but should be used when communicating to a general-interest audience; not before then. Greg L (talk) 18:30, 19 February 2010 (UTC)

Your suggestion looks like a fun way to make them look bad (and to laugh at them). Looks like fun to make fun of those other people! Also, is that special slogan with Better in it officially trademarked, or still on the market? Good, Greg L, you are free to waltz into a cybercafe with a laptop showing the file size of a file rounded in GB. Now ask the people who sit there what the G stands for. Allow me to shoot video while you're doing this + prepare statistics afterwards.


Now seriously, nobody cares enough to do these details right. When explaining the difference, I explain the term gibibyte to laymen: what it means for them, and why there is confusion. (NOT the HDD makers, but the stupid OS makers; Microsoft with Windows is showing it wrong. An OS is used to dumb down computers in the first place.) I do NOT object to describing everything, both meanings, in Wikipedia. Wikipedia is supposed to be an encyclopaedia about everything. It's only natural to demand that it describe every subject completely. I have trouble with people touting GB in its binary meaning while confusing laymen about its meaning. Jeez, it's not the bee's knees to use it that way. By saying gibibyte, you will know whether your audience understands you correctly or not. Stop being so zealous about it. Thelennonorth (talk) 13:26, 25 June 2010 (UTC)

Consumer confusion

"The basis of the problem is of course that the official definition of the SI units is not well known" - of course? - Isn't this a POV? This either needs rephrasing or some citation is needed. As an ex-engineer in the UK I always used SI units, although I am quite aware of the problems with disk and memory usage; this is the first time I have come across the terms GiB etc. Ray3055 (talk) 19:02, 26 November 2007 (UTC)

Remove trivia tag from "Gigabytes in use"

I have removed the {{trivia}} tag from the Gigabytes in use section because I believe this section to be much more useful than mere trivia. It is impossible for most people to grasp what a number of this magnitude means without some practical points of reference. (I remember years ago seeing a print article about McDonald's "billions sold" tagline that explained how deeply 1 billion hamburgers could bury Manhattan Island.) Additionally, it's very useful for passers-by here to learn how much music their n-GB media player should be expected to hold, etc. However, though I'm drawing a blank, I think that some bright editor could find a better name for the section. --Kbh3rdtalk 21:42, 11 June 2008 (UTC)

I've restored the tag, because it clearly meets the definition located here. --DachannienTalkContrib 22:58, 11 June 2008 (UTC)

Tera- Should Be Added Under the JEDEC Column of the Prefixes Chart

I noticed this wasn't in there. I think that terabyte storage is common enough to warrant its addition to the list. I've never heard of a perabyte or pegabyte (I guess those could be the equivalent of the IEC's pebibyte), so I can see why that is not in the list, nor anything larger. However, there is terabyte storage, and I think it is now common enough to be added to the JEDEC column of the list. I would edit this myself, but unfortunately I don't use Wikipedia enough to know how. —Preceding unsigned comment added by Anathematized one (talkcontribs) 06:17, 1 July 2008 (UTC)

Never mind, I figured it out and changed it. --Anathematized one (talk) 06:25, 1 July 2008 (UTC)
I reverted your change because the terabyte is not defined by JEDEC. Nor is it defined (to my knowledge) by any other standards body in the binary sense. Thunderbird2 (talk) 12:10, 5 July 2008 (UTC)


Example list Sizes

"Dual-layer Blu-ray Discs can hold about 50 gigabytes (50,000,000,000 bytes), dual-layer HD DVD discs 30 gigabytes (30,000,000,000 bytes) of data."

Should it be as is or:
50 gigabytes (53,687,091,200 bytes), 30 gigabytes (32,212,254,720 bytes) or...
about 47 gigabytes (50,000,000,000 bytes), about 28 gigabytes (30,000,000,000 bytes) or...
something else...

As:

50,000,000,000 bytes is 46.56613       gigabytes
50 gigabytes         is 53,687,091,200 bytes

30,000,000,000 bytes is 27.93968       gigabytes
30 gigabytes         is 32,212,254,720 bytes
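For what it's worth, the figures above can be reproduced with two constants (a sketch of the arithmetic; the constant names are an editor's own):

    GB = 10 ** 9    # decimal gigabyte
    GiB = 2 ** 30   # binary gigabyte (gibibyte)

    print(50 * GB / GiB)   # ~46.566, i.e. "about 47" in binary units
    print(50 * GiB)        # 53687091200
    print(30 * GB / GiB)   # ~27.940, i.e. "about 28" in binary units
    print(30 * GiB)        # 32212254720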

Citation for storage capacities

Please do not delete my edit about how a storage drive has less space than it advertises. If you want a citation, you need only go buy these things and see for yourself, just as you would go buy a book that was being cited, or buy a subscription to a magazine being cited. The ease of access to this citation makes it common knowledge, even if you didn't know it beforehand. Also, I was careful not to include any weasel statements or anything else that is against the rules, so please, do not delete it. 70.178.75.61 (talk) 21:03, 4 April 2009 (UTC)

Actually, the storage manufacturers now put it on the box that, for them, GB means 1,000,000,000 B. This makes them correct and reliable. When you look at the full number, you will see that it's correct, that the drive maker is honest. Never thought that the OS could be lying to you?
Thelennonorth (talk) 11:13, 18 June 2010 (UTC)

Unix Installation Size

I was just wondering about the note that a Unix installation is "less than a gigabyte" in size. Most Unix-like operating systems I'm aware of can be installed on anything from a couple of megabytes up to several gigabytes, so I don't think this is helpful at all. [edit] Just an idea: maybe note a specific operating system, like "Microsoft Windows Vista is approximately 9 GB with a default installation" (this is a guess, I don't know the exact number). Korin43 (talk) 21:07, 7 April 2009 (UTC)

Mac OS X.6 reports the MB and GB using the new SI units

With Mac OS X.6 out tomorrow, maybe it should be mentioned that the OS reports the size of files using the official SI units, so now advertised hard-drive space is the same as the one showing on your computer, cf http://www.macworld.com/article/142471/2009/08/snow_leopard_math.html —Preceding unsigned comment added by 68.65.162.254 (talk) 23:35, 27 August 2009 (UTC)

  • The effect is the same; Apple is now being more consistent with file size and storage space, so sizes match regardless of which dialog box or window is reporting the value. They should have done this a long time ago; file size is inherently a disk-storage issue and not a RAM issue. Computer manufacturers still advertise the memory (RAM) capacity of their computers in gigabytes and, in that context, it always means the binary value—for Apple and all other major manufacturers of computers—when communicating to a general-interest audience. They’ve always consistently adhered to the principle that disk storage is measured in the decimal sense. Regardless of the press release, as always, the term “gigabyte” can mean either 10^9 or 1024^3 bytes in the real world. Usually, one won't go wrong by assuming the decimal meaning for disk-based storage and the binary meaning for transistor-based memory.

    It would be nice if Wikipedia could overcome the shortcomings of this ambiguity by promoting the adoption of unambiguous terminology such as the IEC's utter flop of a suggestion that the world use words like “kibibytes” and “gibibytes.” The world hasn't adopted it and Wikipedia, in order to communicate naturally and in the least confusing manner possible, must necessarily follow the practices of the real world. A half-dozen über-fans of the IEC prefixes (who were, unfortunately, also Wikipedians) tried for three years to promote their adoption by using them here in Wikipedia’s computer-related articles in an “Oh… didn’tcha know?”-fashion. The end result was three years of looking foolish and the world is still ignoring the IEC prefix-suggestion and is still using the conventional terms (and manages to deal with the inherent ambiguities without the world falling apart).

    Far too much hyperbole has transpired on Wikipedia on this issue. Just look towards modern, leading, computer-related periodicals for guidance on how to deal with the ambiguity (it is rarely a problem unless one is writing about a subject where it is truly necessary to split hairs on storage space) and go with the flow. Greg L (talk) 23:45, 18 February 2010 (UTC)

Sigh, what a mess!

This all looks like marketroids have won the day.

Computing storage capacities never have been and never will be a decimal function. They can be dressed up as such, but that just makes things messier. 121.74.5.96 (talk) 01:58, 2 September 2009 (UTC)

That's why we have a new batch of prefixes.
Ki, Mi, Gi, ...
Stop the ambiguity! Use units and prefixes correctly!
Thelennonorth (talk) 15:09, 11 September 2009 (UTC)
Please stop your advocacy comments, this is not a forum. Kbrose (talk) 15:41, 11 September 2009 (UTC)
All I see here is Wikipedia blatantly taking one side of an argument that is still ongoing. 58.107.110.11 (talk) 03:55, 22 September 2009 (UTC)
I agree with the above. This is an ongoing issue and to take one side of an argument adds weight to that argument. It would be best to not take a side currently and make note of both.

The biggest problem is that the campaign promoting the change keeps insisting that the measurement must be changed to fix the conflict with SI prefixes. Rather, the campaign should be insisting on a name change instead. This would solve the purists' concerns about the so-called ambiguity and clean up the mess that has arisen. And, as a bonus, fix the HDD marketing cheats to boot. Of course, saying GEE-GEE-BYTE or GIG-EE-BYTE might garner some weird looks for a while. Evanh (talk) 04:57, 20 October 2009 (UTC)

83.235.16.68 (talk) 00:14, 6 January 2010 (UTC) Gibibyte my foot! Since computers appeared on earth, each higher unit was 1024 of the previous. One KB (read kilobyte) is 1024 bytes. KB means kilobyte, not some kibi-whatever-byte!!! One GB is 1024 MB; it always was so. Now if hard disk vendors want to exaggerate their capacities, fine, but to turn the rules around is ridiculous. Western Digital may lobby next that a byte should be 7 bits... This article, in my opinion, is biased, admitting very recent proposals as fact without even providing the dates of these SI proposals.

Actually, if you will look at either Binary prefix or Timeline of binary prefixes you will learn that disk drives have always been quoted using decimal prefixes, going back to long before personal computers existed... going back, in fact, to the very first hard drive (IBM 350 RAMAC - 5 million, meaning 5,000,000 characters capacity) and continuing forward with no exceptions that anyone has been able to find. Unlike main memory, there is nothing in a hard drive that influences its capacity to be a power of 1024, nor even a small multiple of a power of 1024. Look in an old PC's BIOS at all the ATA drive types, with all the weird combinations of numbers of heads per cylinder, cylinders per surface, and sectors per track. You will see that powers of two are the exception there, not the rule. I seem to recall early MFM drives having 17 sectors per track... And since hard drive makers got to each "threshold" before the RAM makers - e.g. we had gigabyte (10^9) hard drives long before we had gigabyte (2^30) RAM modules, or even that much total installed RAM - the claim that "it always was so" is mistaken. It was not always so for hard drives, quite the opposite in fact! This is not a matter of opinion, this is historical fact, rather thoroughly referenced at those two articles. Similarly, this is not a case of "HDD marketing cheats"; everyone in the HD market is following the same rules. The "hard drive size confusion" could be solved very easily by getting the OS vendors to display HD and file sizes the same way the HD makers do, with no need to use IEC prefixes there either. The latest version of Mac OS X has already done this; maybe Windows 8 will follow suit. Since there is no reason to compare HD capacity directly to RAM capacity, there is really no problem with the OS using MB=2^20 bytes for RAM and GB=10^9 bytes for drives... you're getting what you're paying for in both cases. You just think you're not with the HD because the OS is using a different divisor. But that is not the drive maker's fault. Again, this is a matter of historical record: drive makers were using decimal prefixes long before OSs were doing anything but displaying HD and file sizes as exact numbers of bytes, with no prefixes or divisors at all. Later, OS writers said "let's divide by 1024 or 1,048,576, even though the drive makers divide by 1,000,000!" So why blame the HD makers? Jeh (talk) 08:41, 3 March 2010 (UTC)
What you've really proven there, even though it's extremely disc/tape centric, is that in the early days of formulating computing methods and definitions everyone used decimal powers of scale. After all, math is based around decimal. But after some use it was found that scaling in powers of 2 fits into computing much better than powers of 10 do. So they changed to the superior definition and method. Evanh (talk) 03:26, 20 March 2010 (UTC)
Powers of 2 only fits better for binary-addressed memory. It makes sense to size the memory that way because then all possible combinations of values of address lines map to a valid address. It even makes sense to quote the memory size that way because you then get a series of small numbers that are easy to remember, easy to add together in your head, etc. But there is no justification for extending this to types of storage or devices that have no "binary influence" in their sizing. Jeh (talk) 00:27, 29 May 2010 (UTC)
I'd say there's no justification not to extend that to file storage. Evanh (talk) 16:47, 10 June 2010 (UTC)
Except that the sole reason for applying it to binary-addressed memory just doesn't apply to file storage. Similarly timing crystals and ceramic resonators. If I buy a "10 MHz" crystal it is going to be 10,000,000 Hz whether it's going into a computer device or not. Jeh (talk) 21:28, 10 June 2010 (UTC)
Binary does apply to files. They are just as tied to computing as everything else digital, and that ain't gonna change. But even ignoring that, having two scales is still silly. Evanh (talk) 14:23, 14 June 2010 (UTC)
Nonsense. I can very easily have a file length of 512, or 1000, or 937 bytes. That something is "tied to computing" does not mean its size is best expressed as small multiples of powers of two. Jeh (talk) 21:47, 25 June 2010 (UTC)
So true, but not an argument for either side of the fence. Evanh (talk) 02:09, 26 June 2010 (UTC)
With regard to HDD sticker specs, it doesn't really matter how they are sold, so long as they are comparable. In that respect what they've done is not a problem, because they are all cheating in a consistent manner. What matters is how the HDDs are used. And that's where these SI prefix zealots have jumped in and messed things up. File sizing in decimal scales is nonsense: it doesn't match RAM usage, nor does it match HDD block sizes, and it's just plain silly to have two differing measurements for computing memory. Computing is a binary system. Binary scales are all that's needed. As someone just pointed out, there isn't any value in even having an official decimal scale. Evanh (talk) 03:26, 20 March 2010 (UTC)
Why should file sizing match RAM usage? Except for cases where you map an entire file, entire files don't have to fit in RAM at once. For things like exe's and dll's the relationship between file size and required address space is not exact, anyway. Here's an example showing the value in decimal prefixes: Suppose I have a file that is 3,709,247,018 bytes long. Quick, how many gigabytes is that? By inspection you can say "3.7 GB" and you're correct for the SI meaning of GB. But you insist that it be called 3.45 GB. Can you do that in your head? Most people can't. Frankly, the fact that "computers use binary" is a red herring; we use decimal and the computers should conform to our convenience, not the other way around. Arguing that things like file sizes should be expressed using powers-of-1024 prefixes is only slightly less silly than arguing that they should be expressed in hexadecimal. Jeh (talk) 00:27, 29 May 2010 (UTC)
What matters is the allocating, measuring and reporting. It doesn't matter whether it is RAM or disc being used, they should be measured with the same scale. The alternative is silliness. Heh, I'd laugh if the Mac actually has two differing scales now. Evanh (talk) 16:47, 10 June 2010 (UTC)
Since neither HD makers nor RAM makers are going to change, we are stuck with two different scales. Given that, it is silliness to continue to use "GB" for both scales. That's why "GiB", etc., were invented. Jeh (talk) 21:28, 10 June 2010 (UTC)
If you want to talk about the usefulness of having two scales of measure, then that's one thing, but using product labels as a definition is just broken. And I'd laugh if that were the reason for the two definitions, i.e. the HDD manufacturers were upset that no-one else used the decimal scale. Evanh (talk) 02:32, 15 June 2010 (UTC)
It would make very little sense to build a DIMM with, for example, 2,000,000,000 bytes capacity, as multiple such sticks could not be easily combined into a single block of memory with a contiguous span of byte addresses. Hence the DIMM is going to be, in this case, 2×1024^3 bytes. I am sure you agree with this so far.
The fact that chips are built to those sizes doesn't itself define a standard for measurement. Interestingly, though, the construction of memory chips exactly on powers of 2 derives from the same technical reason for using and measuring in powers of 2. Evanh (talk) 02:09, 26 June 2010 (UTC)
What you are missing is that that is the sum total of the influence of binary notation on memory sizes. That influence does not apply to any other component. In particular there is nothing about either the construction of a hard drive or the way it is interfaced to the host that "encourages" their sizes to be binary numbers with a whole lot of 0 bits at the end. The very first hard drive (IBM RAMAC) had a capacity of "five million characters", meaning 5,000,000. (And the characters were only six bits wide, not eight.) And the LBAs by which we address hard drives are just numbers - not binary, not decimal; they're just numbers. It is wrong to think of a number as "a binary number" just because it's stored in a computer. Jeh (talk) 21:47, 25 June 2010 (UTC)
Manufacturer labeling doesn't constitute a use nor define a standard. Heh, that number, the size of a block on a HDD, ain't just any old number; that number is 512. It is a power of 2 and it always will be a power of 2, and for good binary reason. As everyone knows, the addressing mechanism of computer memory is done with physically digital signals: one bit per trace on the board or chip, with each bit of the address bus adding another power of 2 to the address range. This strongly leads to a number of situations where powers of 2 are a snug fit. It extends right into the OS, and even application software at times. The basic example is RAM itself: if the RAM chip has space for all the addresses available on its address lines, then there are no holes in its map, which makes both the wiring and the managing of those addresses a whole lot simpler than if the map had a hole in it. You just have to look at I/O address ranges to see what happens when there are lots of small groups of addresses for device control: pockmarked like the moon's surface, and the address decoders become humongous. Further up the chain there is OS-level memory management to contend with. It's heavily dependent on fixed page sizes and page boundaries for the same decoding and matching compare circuits. And, funnily, swapping pages to disc comes into play here. But it's not just paging that defines disc block sizes; DMA engines have the same love of powers of 2. Both the logical and physical addressing in digital systems fit with powers of 2 like a glove. Evanh (talk) 02:09, 26 June 2010 (UTC)
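Since the argument above turns on address decoding, here is a small sketch (an editor's illustration with arbitrary example numbers) of why power-of-two block sizes are convenient: splitting an address into block number and offset needs only a shift and a mask, no division:

    # With a 512-byte (2**9) block, block number and offset fall out of
    # simple bit operations on the address.
    BLOCK_SIZE = 512
    addr = 1_234_567

    block = addr >> 9                   # same as addr // BLOCK_SIZE
    offset = addr & (BLOCK_SIZE - 1)    # same as addr % BLOCK_SIZE
    assert block * BLOCK_SIZE + offset == addr
    print(block, offset)                # 2411 135

The mask identity only holds because BLOCK_SIZE is a power of two; with a decimal block size it fails.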
Computers are binary machines. I'm damn confident that ain't gonna change. Get used to it. The whole point of using KB, MB, GB, TB symbols in the first place is to make the quantities more human-readable ... and from that perspective the exact scales aren't all that important, but it doesn't change the fact that GB is 2^30.  :) Evanh (talk) 16:47, 10 June 2010 (UTC)
The fact that "computers are binary machines" is irrelevant to how things should be displayed for people to read. It is also something of a canard, since it is only a description of the internal representation of numbers, and of course the circuits that work on them. But the fundamental operations like addition and multiplication are not "binary" or "decimal". The results (i.e. where you would end up on a number line) are the same either way. The only thing in this discussion that led to corruption of KB, MB, and GB to mean powers of 1024 is the fact that it is convenient to build and interface main memory in such sizes. The fact that the computer uses a binary internal representation for integers is irrelevant. Jeh (talk) 21:28, 10 June 2010 (UTC)
It would be silly to have two scales for the same measurement on two devices. And it very much is the same measurement. Computers being binary is very much not irrelevant; it is how the memories get used. Like I've already said, they would never have adopted binary scaling if it wasn't more convenient. Evanh (talk) 14:23, 14 June 2010 (UTC)
Yes, it is silly, but that is the situation: solid-state memory like DIMMs is quoted in binary gigabytes, and hard drives, bus speeds, network speeds, etc., are quoted in decimal gigabytes. I'm fairly sure that none of those are going to change anytime soon. (And that is exactly the argument for moving to IEC prefixes for binary gigabytes: the same term, such as "gigabyte", should not be used to mean two different things.) So it would be incorrect for this article to state that gigabyte always means 1024^3. Jeh (talk) 21:47, 25 June 2010 (UTC)
Again, you are throwing in wrong examples by confusing use with product labels (in use, hard drives and files are both measured with binary scales) and examples that are both wrong and irrelevant (data speeds don't match their labels and are just fuzzy anyway from a memory-usage point of view). Evanh (talk) 02:09, 26 June 2010 (UTC)
There is no particular reason other than convention that disk blocks are 512 bytes, either. In the early days of hard drives they were very often something else, as the Binary prefix article describes. Jeh (talk) 00:27, 29 May 2010 (UTC)
I beg to differ. If it is convention then that convention is very much based on a sound foundation. Otherwise they would never have changed from decimal allocating, decimal measuring and decimal meaning of the symbols in the first place. Evanh (talk) 16:47, 10 June 2010 (UTC)
Er, the hard drive makers never did change from the decimal meaning of the symbols, so I'm not sure what your point is. Jeh (talk) 21:28, 10 June 2010 (UTC)
I'm talking about the Timeline of binary prefixes you so kindly provided, where it shows the trend from decimal to binary scaling. And I'll repeat: what matters is use and why it is used, not some marketing labels. This is where Wikipedia is manipulating the subject. Wikipedia is changing our use. Wikipedia is leading the definition. Evanh (talk) 14:23, 14 June 2010 (UTC)
The trend you speak of is only for random-access memory. Not for hard drives. Yes, what matters is "use." The hard drive, communications, etc., segments of the industry use "giga" in the decimal sense, and have done so since long before Wikipedia existed, so how you can claim that "Wikipedia is leading the definition" is beyond me. Jeh (talk) 21:47, 25 June 2010 (UTC)
Wrong and irrelevant. Product labels don't constitute a use nor define a standard. Evanh (talk) 02:09, 26 June 2010 (UTC)
On a tangent, data speeds, contrary to some assertions, don't even use decimal scales. Instead they usually use some mishmash that matches no scale. After all, the devices have to perform their phase locking to match whatever the exact data rate is. Frequency counters will give the answer in decimal; I bet in a lot of cases these frequency readings won't match up with the so-called data-rate spec. Evanh (talk) 03:26, 20 March 2010 (UTC)
Incorrect. The receiver in many cases performs as you state, however the transmitter is almost always driven by a crystal-derived clock and, I assure you, is very much on spec. The channel bit rate might not match the spec'd bit rate due to various modulation schemes, error correction codes, etc., but that's a different question, and is completely unrelated to the 1024-vs-1000 issue. Jeh (talk) 00:27, 29 May 2010 (UTC)
Yet data-rates are still used as a reference for people stating that binary scaling is wrong. How sad is that? Evanh (talk) 16:47, 10 June 2010 (UTC)
Yet "everything in a computer is binary" is still being used as an argument by people stating that decimal scaling is wrong. How sad is that? The underlying point is still valid: When I buy a "10 MHz" crystal or ceramic resonator it is 10,000,000 Hz. When do you think WWVB is going to change from 60,000 Hz to 61,440 Hz? They use computer-controlled transmitters after all, and "everything in a computer is binary", right? Jeh (talk) 21:28, 10 June 2010 (UTC)
The inaccuracies are exactly my point. The label says 2 Mb/s, but when you get right down to it the crystal will not generate a 2,000,000 b/s or 2,097,152 b/s stream; it'll be something else altogether, and yes, it'll be specified as that "something else" when you look at the technical data. So the marketing label is not based on a standard; it's just rounded to something close that is easy to read. Evanh (talk) 14:23, 14 June 2010 (UTC)
Evanh, you're messing things up again. In the spec sheets you will find reference values AND uncertainties, maximum errors, and deviations specified in % or absolute units. It's NOT specified as that "something else" to which you refer. Please make a little effort to be somewhat correct (hint: research) in your statements. This has nothing to do with the subject of the article. Do NOT use this TALK PAGE as a FORUM! Thelennonorth (talk) 12:51, 25 June 2010 (UTC)
"The label says ..." is the subject of this data-rate sub thread. The exact details are another matter. It is just an example of a typical red-herring that SI zealots like to throwing around to suggest that decimal scaling might have some validity. Reality is, the measure of data-rates has always been a bit fuzzy; it's accuracy doesn't mattered in terms of memory usage - and memory capacity/usage is what this is really about. Evanh (talk) 14:39, 25 June 2010 (UTC)
Uh, I'm well aware of what spec sheets say, having spent part of my career selecting components based on them. But those little tolerances are far, far smaller than the difference between 1000^2 and 1024^2. Regardless, when someone talks about buying "100 megabit ethernet" equipment they are buying stuff that the manufacturers think runs at 100,000,000 bits/second, not 100x1024x1024. Jeh (talk) 21:47, 25 June 2010 (UTC)
Yes, we can all give examples, I've given one already, but my point is ... data-rate labels, or even the technical accuracy of device labeling, don't matter in terms of memory usage, and memory capacity/usage is what this is really about. Evanh (talk) 10:35, 26 June 2010 (UTC) Clarified: Evanh (talk) 02:10, 27 August 2010 (UTC)

But, fine - to get back to discussing the article. I think the lede is misleading, as I stated in the section below, "Lede is misleading".

Jeh (talk) 21:47, 25 June 2010 (UTC)

The whole article is intentionally misleading. Evanh (talk) 02:04, 27 August 2010 (UTC)

Datasheets and IEC standards aren't good friends.

Hello everybody. I want to explain why standards like "mibi" can NOT make life easier. In datasheets (the pages with the technical data about microchips, generally and generically) you can search for "megabyte" and you will find things like this:

M29F800AT

M29F800AB

8 Mbit (1Mb x8 or 512Kb x16, Boot Block) Single Supply Flash Memory

You can check it without doubt: that chip is 8 Mbit (1 megabyte), per the JEDEC standards. What's wrong with it? Why can't we understand that "megabyte" in chips is 2^20? If we put together (hypothetically) 1024 chips like this and a microcontrolled system that recognizes them as a hard drive, can't we have 1 gigabyte? Confusing.

Also, are the RAM "megabytes" and "gigabytes" on graphics boards using this "tricky way" to confuse users too? This needs to be verifiable, don't you think?

Conclusion: if "mibi" were the solution, there would be problems with the electrical engineers; the universities (at least the ones I know) don't teach this standard, making this confusion grow. Jordaker (talk) 23:29, 22 April 2010 (UTC)

If we put (hypothetically) 1024 chips like this and a microcontrolled system who recognizes it like a hard drive, can't we have 1 Gigabyte? Not in terms of advertised capacity, because flash memory devices need a considerable amount of "spare" space for wear leveling and similar functions. In fact a flash memory device marketed as "16 GB" will have a user-accessible capacity of 16,000,000,000 bytes, just like a "16 GB" hard drive. Yes, the number before the GB is a power of two, nevertheless the "GB" means 1,000,000,000 bytes. Yes, it's confusing, but most of the consumer confusion is because the most popular OS will show this as "14.9 GB". If the OS used SI prefixes to match the SSD packaging it would be less confusing for the user. Jeh (talk) 17:09, 3 June 2010 (UTC)
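The "14.9 GB" display mentioned above is just the marketed decimal capacity divided by 1024^3 while the "GB" label is kept; a one-line sketch (an editor's arithmetic check):

    marketed = 16 * 10 ** 9                  # "16 GB" as printed on the box
    print(f"{marketed / 1024 ** 3:.1f} GB")  # 14.9 GB, as the OS shows it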
Reality is, the raw flash chip is exactly 16 GB, not one bit greater nor smaller, so it really is the binary scale for this one. But, again as with HDDs, the 16 GB label is only for purchasing decisions, not actual use nor expected usable capacity. I have a 16 GB SDHC card plugged in here ... unformatted capacity is reported as 16,531,849,216 bytes. A full 16 GB == 17,179,869,184 bytes. Interestingly, the partition is marked as type 6 (FAT16), but I guess that doesn't worry modern OSes. Evanh (talk) 18:38, 10 June 2010 (UTC)

With certain unnamed cellular phone companies now limiting usage to 5 GB

Can someone put how much 5 GB is in simple terms? This is so very confusing. —Preceding unsigned comment added by 166.183.210.251 (talk) 13:12, 3 June 2010 (UTC)

I haven't seen any "fine print" the way the HD makers use, but I expect that the phone companies are meaning 5,000,000,000 bytes. It is usual for network transfers to be cited using the SI (decimal) prefixes. Furthermore it's more beneficial for the phone companies to allow that number rather than 5,368,709,120. It'll be really confusing for the users if their phones use JEDEC-style binary prefixes; the phone could be saying you've only used 4.99 GB and your bill could say you've gone over. Jeh (talk) 17:10, 3 June 2010 (UTC)
I can assure you it's common for both the usage plans and the usage meter to be based on binary scaling. After all, it's shown as GBytes, not Bytes, thereby truncating all the extra digits. Ask the provider is my advice. Evanh (talk) 16:51, 10 June 2010 (UTC)
Truncation would occur whether their "GB" means 1024^3 bytes or 1000^3 bytes; that's not any sort of evidence. My bet is that they'll use the decimal meaning as it's more favorable to them. Jeh (talk) 21:46, 10 June 2010 (UTC)
I can assure you it's common for both the usage plans and the usage meter to be based on binary scaling. Evanh (talk) 13:13, 14 June 2010 (UTC)
I'm afraid your assurance does not constitute a reliable source. Jeh (talk) 21:09, 25 June 2010 (UTC)

How can I monitor how many GB are being downloaded? How do I gauge what a download is... video or game or pictures? (Are they counted?) user:germaine —Preceding unsigned comment added by 216.218.29.254 (talk) 03:01, 10 February 2011 (UTC)

Redundant info here, better article is elsewhere

The "consumer confusion" section here is redundant with the longer, more detailed, more general Binary prefix article.

I suggest reducing this article to the bare minimum description of the two values of gigabyte and just referring to Binary prefix for everything else. Jeh (talk) 17:03, 3 June 2010 (UTC)

Lede is misleading

The IEC binary prefixes have not seen anything even close to widespread adoption. The Binary prefix article gives numerous examples where the IEC prefixes are not being used. Even the IEEE, which has approved the IEC's recommendation as an actual standard, does not require their use in their own journals Computer or Spectrum. No examples have been found of magazines printed in English that use the IEC prefixes. Their use in software is limited to a few versions of Linux and not quite 20 software packages.

However the lede of this article gives the impression that the IEC prefixes are on a more or less equal footing with the use of SI prefixes with binary meanings (that is, the use of JEDEC prefixes).

The lede should therefore be changed to reflect real-world practice.

Jeh (talk) 17:22, 3 June 2010 (UTC)

LOL, that's because everyone uses GB == 2^30 and so has no need of the GiB symbol. Evanh (talk) 16:51, 10 June 2010 (UTC)
That's a different question. There are two different issues here. On the one hand, people are objecting that the lede says that "GB" can mean 1,000,000,000 as well as 1,073,741,824; the fact is that the hard drive industry does use it that way (as, now, Mac OS X does for file sizes), so that fact can't be ignored. What I'm objecting to here is that the lede also seems to imply that the IEC prefixes are widely accepted and used, and of course they're not.
I would also agree that the lede gives an inappropriate amount of weight to the decimal meaning of gigabyte.
Actually, I think all these little articles should be collapsed to just a paragraph or so and then provide a "See also" to Binary prefixes. That article has passed muster among a couple of the proponents of the current WP policy (which discourages the use of IEC prefixes on WP, except in special circumstances) and also among IEC proponents. Jeh (talk) 21:35, 10 June 2010 (UTC)
I think you'll find I'm spot on. How about a quote? Evanh (talk) 13:34, 14 June 2010 (UTC)
The hard drive makers absolutely do not use GB=2^30 and never have, so your claim of "everyone" is clearly wrong. Jeh (talk) 21:10, 25 June 2010 (UTC)
Irrelevant. Manufacturer labeling doesn't constitute a use nor define a standard. Evanh (talk) 02:00, 26 June 2010 (UTC)
When a manufacturer applies a label that quotes drive capacity in GB, and then says "1 GB = 1,000,000,000 bytes", that "doesn't constitute a use"??? What is it then? In the sentence "Hard drive manufacturers use GB to mean 1,000,000,000 bytes." how would you change the word "use" to fit with your understanding of English vocabulary? Jeh (talk) 23:39, 14 July 2011 (UTC)

It's not just misleading, it's highly POV, pushing the POV that the decimal definition is correct and the binary definition is just a leftover secondary use. 76.226.219.115 (talk) 23:22, 14 July 2011 (UTC)

According to SI, using GB to mean 1024^3 bytes is not just secondary, it should not be done at all. Now it is true that JEDEC's documents do support the use of GB=1024^3 bytes. But JEDEC, as a trade organization, does not have the same stature as SI; JEDEC is just a bunch of manufacturers agreeing with each other. So if (per Evanh above) hard drive makers' absolutely universal usage of GB=1,000,000,000 bytes doesn't constitute a standard, then by the same criterion, neither does anything from JEDEC; it's just a bunch of manufacturers documenting how they're going to do their labeling. Jeh (talk) 23:39, 14 July 2011 (UTC)

MiB and GiB used by UK Gov

Oooo! Look!! First time I've seen MiB and GiB used outside of this place. UK government site [5]. Worth a mention? RatSplat ooo 10:50, 4 June 2010 (UTC)

The power of the pen (or how a few arrogant Wikipedia editors can change the meaning of a word)

When I first came across the Gigabyte article about six months ago and saw the reference to the term gibibyte I thought, "that's crazy, no one uses that term, someone must have vandalized the page." However, I did a Google search for gibibyte and found a few hundred references at that time and concluded that the term actually was in limited use. Today (10th June, 2010), if you use Google to search for gibibyte you get over 300,000 hits. I would contend that this increase is largely due to the Gigabyte article in Wikipedia.

I am not saying that the term is not subject to confusion, or denying that various standards bodies have the right to define the term however they wish; only that the term has a commonly accepted definition in the industry, as indicated by many sources, including The American Heritage® Dictionary of the English Language, Fourth Edition, Copyright © 2009 by Houghton Mifflin Company, which defines gigabyte as: A unit of computer memory or data storage capacity equal to 1,024 megabytes (2^30 bytes).

129.35.87.198 (talk) 10:32, 10 June 2010 (UTC)

I guess "the industry" to you does not include the hard drive industry? Hard drive manufacturers were using "gigabyte" for 1,000,000,000 bytes long before Wikipedia existed, and in fact even before RAM was available in gigabyte sizes. The same was true by the way at the megabyte threshold too; the HD makers got there with the decimal meaning long before memory makers did. See Binary prefixes for the facts... including references to dictionaries that correctly give both meanings. Jeh (talk) 11:19, 10 June 2010 (UTC)
You do realise that the HDD manufacturers labeling is not even slightly relevant, right? If that's the only argument for decimal scaling then Wikipedia has most surely got it wrong. Should just be a side note to say that hard drives are labeled as such. Evanh (talk) 16:50, 10 June 2010 (UTC)
Sorry, but WP practice is to follow, not lead, its references. In this case the references are several tens of millions (at least) of hard drive boxes, advertisements, and pieces of industry technical literature. And I will once again note that the hard drive industry had gigabyte-sized (10^9 bytes) products on the shelves long before the RAM industry did. You can't just wave your hand and say "that's irrelevant."
Irrelevant. Nobody's labels define a standard. Wikipedia is very much pushing the interpretation that decimal scaling has been in common use. It hasn't been common, even for HDDs, for many decades. To the point that there never was a standard for decimal scaling before SI defined one. All the actual industry bodies used GB == 2^30. Evanh (talk) 13:45, 14 June 2010 (UTC)
WP is not here solely to document "a standard". Real-world usage, particularly by an entire industry segment, is very much relevant. Decimal scaling is not just "common" for HDDs; it is for all practical purposes the only way that HDDs have ever been sold, and "gigabyte"-sized (1000^3) hard drives existed long before gigabyte-sized RAM. In fact the only "industry bodies" that support the "binary gigabyte" (and megabyte, and kilobyte) are JEDEC and others in the semiconductor memory field. Jeh (talk) 21:06, 25 June 2010 (UTC)
Good one. The industry uses one way while the manufacturers label another way but somehow you've linked them together. Evanh (talk) 01:58, 26 June 2010 (UTC)
Hard drive manufacturers are part of "the industry" by any possibly reasonable interpretation of the terms. Jeh (talk) 21:24, 12 August 2011 (UTC)
Incidently the last six month period you cite is not really the right period to look at. The use of IEC prefixes on WP is severely curtailed by WP:MOSNUM, almost all of the articles on WP that are using the IEC prefixes are doing so contrary to that policy, and a few editors are busily going around editing those pages to bring them into compliance. No, the heyday of IEC prefixes on WP was a couple of years ago, as for a while, the routine use of IEC prefixes was supported by MOSNUM. If you want you can read the MOSNUM talk page archives for the whole bloody history of those fights (there were several). Jeh (talk) 21:44, 10 June 2010 (UTC)
No idea what you are talking about here other than you are again ignoring all the Wikipedia history of GB == 2^30. If you are talking about something in the opening post, then that wasn't me. Evanh (talk) 13:44, 14 June 2010 (UTC)
Stupid Wikipedian Evanh (talk), I can do what you do too.
@ Evanh (talk) Quote with changes for decimal: Irrelevant. Nobody's labels define a standard. Wikipedia is very much pushing the interpretation that binary scaling has been in common use. It hasn't been common, even for RAM, for many decades. To the point that there never was a standard for binary scaling before SI defined one. All the actual industry bodies used GB == 10^9 and GB == 2^30. Most of the industry uses the SI units for everything, including MOST parts of the ICT industry! Network card transfer rates and processor speeds all use k = 1000, M = 10^6, ... This overzealous clinging to keeping things just because they are partly present right NOW or in YOUR PAST doesn't make it right. Most people think and read those units (in the consumer space) as multiples of ten.
You really should be reasonable about what you say. You are a biased, fanatic, zealous douchebag. The world is slowly shifting to those definitions; it just doesn't happen overnight/in the blink of an eye. The only high-profile OS maker that doesn't use these prefixes is Microsoft with Windows. Linux and Mac already use, or are switching to, the new units.
What's all this dumpshit about most Google hits coming from Wikipedia? I get 188,000 results. The first page of 10 results is in Wikipedia, but that's it. Some pages have a reference to Wikipedia among their ten results; page 8 or 10 or 15 does not have any Wikipedia webpage reference in it. Speaking of dictionaries:
gibibyte meaning: meaning of gibibyte from Bee English Dictionary
gibibyte meaning(s) Add to My List. (n) a unit of information equal to 1024 mebibytes or 2^30 (1073741824) bytes. »» Synonym(s) ...
How many Gigabytes is a gibibyte? - Yahoo! Answers
25 Jan 2008 ... Most people will use the term gigabyte that can fall under 2 definitions: 1. A gigabyte is 1000000000 bytes 2. A gibibyte is 1073741824 bytes ...
answers.yahoo.com/.../index?... - United States - In cache - Comparable
www.beedictionary.com/meaning/gibibyte - In cache
Thelennonorth (talk) 14:13, 17 June 2010 (UTC)
Who's being a zealot? Evanh (talk) 23:55, 17 June 2010 (UTC)

Proposed Gibibyte merge

The Gibibyte article is in essence a stub with lots of links via GiB; please merge it here. For discussion, Wikipedia talk:COMPUNITS might be better than this talk page. –89.204.153.166 (talk) 12:02, 3 August 2011 (UTC)

I think ALL of the per-unit articles (gigabyte, gibibyte, GB, GiB, iterate for K, M, T, etc.) should be stubs with a "see also" to Binary prefix. Not simply redirects - if all one wants is a quick definition, one should not have to scan through the whole Binary prefix article to find it. But in terms of greater discussion I do not see anything that the per-unit articles are doing that shouldn't be said in Binary prefix. Jeh (talk) 20:24, 3 August 2011 (UTC)
Strongly disagree with a merge. All the units should be kept separate. Jenova20 15:18, 12 August 2011 (UTC)
As I said above, we should retain separate "stub" articles giving the basic definitions and so on. But as it is there is a great deal of duplication and overlap between Gigabyte, Megabyte, etc., and Binary prefix. Almost all of it having to do with the binary vs. decimal interpretation, citing examples of use, court cases, screen shots of OS displays, etc. This is wasteful of editors' and readers' time. Jeh (talk) 21:21, 12 August 2011 (UTC)
Strongly disagree with a merge. Each different counting term should have a direct FULL page of info on each one, so that if a user wants a quick snapshot of understanding, they can thus look at the exact SEPARATE page for the term they are trying to get info on, without having to find the exact info on a more general page. The grouped term pages are there IN ADDITION to give a GROUPED overview of similar terms as a whole, and often in a wider context as/when needed. These are two subtle yet distinct functions. Jimthing (talk) 10:30, 22 August 2011 (UTC)

Pronunciation (jiga vs giga) again

There is an old discussion about proper pronunciation on this page which needs to be revisited. The pronunciation currently in the article is Giga, and that pronunciation is wrong; see "http://www.merriam-webster.com/dictionary/giga-." Both pronunciations are listed, but Jiga is listed first, which means that it is preferred. Dictionaries often list mispronunciations when they become common; see http://www.merriam-webster.com/dictionary/nuclear. Just because a mispronunciation is common does not mean it should be perpetuated here, or at the very least both pronunciations should be listed in the correct order: Jiga, Giga. Jarhed (talk) 01:05, 9 August 2011 (UTC)

I for one have absolutely never heard it pronounced "jiga", outside of Back to the Future, and not even then for "gigabyte." Jeh (talk) 08:59, 9 August 2011 (UTC)
I'm not sure what you mean by your comment. Are you claiming that your recollection is sufficient to trump reliable sources? Jarhed (talk) 02:06, 10 August 2011 (UTC)
It's certainly enough to motivate research beyond one source, singular, which is all you provided. Particularly as Merriam-Webster is known for not being particularly rigorous (last I checked they even allowed "irregardless" as a word, albeit "nonstandard"). Cambridge Dictionaries Online, for example, says it is "giga" (hard G) for both UK and US. Oxford Online only lists one pronunciation, with a hard G (but only as an "entry from World dictionary", so it's not really the OED). Tell you what - why don't you look here and take a count of how many list which pronunciation(s)? We need five counts: Only hard g; only soft g; both, but hard g preferred; both, but soft g preferred; and no pronunciation given. The total should be 44. While you're looking at all those entries it would also be worthwhile to make similar totals for 109 vs. 230 meanings. Jeh (talk) 05:00, 10 August 2011 (UTC)
I would be delighted to perform this work if I thought it would help, but I'm not sure what a survey of online dictionaries would obtain. The Wikipedia:Manual of Style (pronunciation) says that an option for providing pronunciation guidance is to link to the corresponding Wiktionary entry, so that is what I recommend. I note with amusement that Wiktionary only provides the Jiga pronunciation for US English, so I presume that this is yet another example of American perversity. Additionally, I think that the Wiktionary entry handles the binary/decimal issue neatly and succinctly. Cheers! Jarhed (talk) 16:52, 10 August 2011 (UTC)
My point is that there is clearly a difference in what the "reliable sources" claim. So before there is a case for changing the article to prefer "jiga", a "head count" needs to be made of them. btw, I've traveled quite a bit in the UK for work in the computer field, talked with a lot of people in the field, and I haven't heard it pronounced with a soft G there either... so I don't think it's just "American perversity". Jeh (talk) 19:44, 10 August 2011 (UTC)

A little history is useful. Engineers who used the terms gigawatt, gigajoule, gigahertz, etc. years ago were using the "j" pronunciation, from a few hundred years of tradition differentiating the Greek pronunciation of "g". Classically, that is correct. The modern usage in bytes apparently came from computer geeks with no outside knowledge of advanced physics except as encountered in text. Spoken in academia, traditionally all "giga" was "j". — Preceding unsigned comment added by 74.81.189.18 (talk) 14:30, 28 February 2012 (UTC)

As I stated in the previous thread, it's from the Ancient Greek, gigas, or giant. We don't say guyant. See "http://en.wiktionary.org/wiki/gigas". Your personal experience doesn't count on Wikipedia. The references and evidence support the jigas pronunciation. John187 (talk) 01:50, 8 August 2013 (UTC)

What is vs. what "should" be

I think we're doing a disservice to the readers of this (and other powers of 2 vs. powers of 10 pages) by insisting that one measurement is exclusive of the other. A reader who comes to this page simply wants to know how large a "gigabyte" is, because they're seeing the term used somewhere, and they need to know what it means in that case.

Computer scientists can insist that 1 GB is 1024 MB. Hard drive manufacturers can insist that 1 GB is only 1000 MB. SI zealots can argue that if you're going to say 1000 MB is 1 GB, then you must be consistent and say that 1 MB is 1000 KB, and furthermore that 1 KB is 1000 B. It's GB where the definition diverges, and the special case of GB needs to be addressed, but first the reader needs a quick overview of what they're likely to be looking at.

They are likely to be in a situation where a "gigabyte" is defined as either 1000 MB, or is defined as 1024 MB. They are practically never going to be in a situation that defines "gigabyte" as 1 000 000 000 B. Even unscrupulous retailers/providers never assert that 1 KB is only 1000 B. We're only misleading readers by suggesting that a supplier who uses 1000 MB as 1 GB was also considering 1000 KB to be 1 MB.

They're purchasing a hard drive, buying internet access, comparing data capacity/throughput. They're wondering why their 500 GB disk is showing up as less when their OS reports it. We need to provide a usable explanation in the opening of the article, before we get into comparing powers of two and SI units. This page needs to be overhauled with the reader's immediate need in mind, followed later by a thorough discussion of the details. gabe (talk) 08:41, 31 January 2013 (UTC)

I don't know why you say "it's GB where the definition diverges". GB is not a special case. To hard drive manufacturers, 1 MB is 1,000,000 bytes (HDs did not always come in sizes measured in GB) while to memory manufacturers, 1 MB is 1024x1024 bytes. There were never HDs measured in KB (the very first one, the IBM RAMAC 350, stored five million (5,000,000) six-bit characters) but decimal prefixes are used in e.g. communication speeds: A "56 kbit/s" modem is running at 56,000 bit/s, not 57,344. Whereas we did used to have RAM sized in KB, and a KB of RAM meant 1024 bytes. This is not just "SI zealots" talking, it's the same equipment makers who are using GB both ways today. Why do you think GB is a special case? Have you read the Binary prefix article? Jeh (talk) 04:23, 1 February 2013 (UTC)
I'm here because I was checking a link from a web page that was using this Wikipedia article to explain the dual usage of the unit GB. The link was put in place many years ago, and at that time the Wikipedia article did an admirable job of explaining the dual usage. However, now I am surprised and disappointed to find what I think reads like advocacy for 1000^3. I think it flouts Wikipedia's Neutral Point of View guideline. I would prefer that Wikipedia present both sides and leave the debate resolution to other fora, to preserve Wikipedia's image as a reference, not a battleground. If there's a battle, report it; don't take sides or allow sides to be taken. I'm agreeing with Snorgy and many others above that the current article is spun toward 1000^3. In my work (telecom and computer science), we use both definitions and thus are sensitive to this: data quantities use 1024^3 and data rates use 1000^3. Both are called 'gigabytes' (I've not heard anyone say gibibyte). We take care to define them. We use GB and GiB. Yes, this is a nuisance, but so it goes. We were using a link to Wikipedia to help with the definition; reluctantly we'll be removing that for now, but will watch hopefully for improvements. — Preceding unsigned comment added by 70.26.10.117 (talk) 15:20, 15 February 2013 (UTC)
I agree that the article is too biased toward the SI gigabyte. May I suggest that until it is fixed, you consider linking to the Binary prefix article instead? It is much longer, but I think you'll find it much more neutral. Jeh (talk) 03:36, 18 February 2013 (UTC)
However, the article must continue to note the fact that the first use of "gigabyte" in product labeling was on a hard drive - since there were gigabyte-sized HDDs long before RAM modules or even total installed RAM started to approach that quantity; hard drives really did get there first. The same is true for megabytes. Jeh (talk) 06:21, 18 February 2013 (UTC)
A biased article might state that there is only one definition, or that there is one "preferred" definition. This one does not do that, stating instead that there are two definitions and noting in which circumstances each is preferred. I agree the article can be improved, but it seems neutral to me. Where is the bias? Dondervogel 2 (talk) 17:13, 19 May 2013 (UTC)

Page Reorganization

There is a very lengthy historical debate on this talk page regarding the proper accepted definition of Gigabyte and the alternative Gibibyte. It has been expressed that this page reads with a bias towards the marketing 10^9 definition, pushing for the adoption of the Gibibyte term; however, such claims of bias were quickly disputed by those who support this nomenclature.

To appease both sides, I have simply reorganized the page so that the historical and more widely accepted definition of 2^30 appears above the promoted definition of 10^9. Beyond swapping the order of two sets of paragraphs, I also adjusted a few adjectives to remove some of the anti-binary bias that was felt. Beyond that, no changes were made.

This, I believe, will help clear up the confusion felt when one arrives at this page and encounters an unfamiliar definition, while still directing the reader to the promoted marketing SI definition and the alternative GiB nomenclature. — Preceding unsigned comment added by Drumz0rz (talkcontribs) 21:10, 6 January 2014 (UTC)

How is the binary usage of gigabyte "more widely accepted" than the decimal? Hard drives use the decimal meaning, and got to gigabyte-scale capacities long before RAM did (just as they got to megabytes first). And as far as I can tell, the binary usage appears only in internal semiconductor memory such as RAM and CPU cache, and in Windows' and some other OSes' reporting of hard drive and file sizes. The decimal usage appears far more often: actual hard drive products, USB flash drives, tape media, most optical media other than CD, transfer speeds (Ethernet, internal buses, etc.). Jeh (talk) 21:20, 6 January 2014 (UTC)
The decimal use of "gigabyte" was in use long before it was corrupted by Microsoft. Dondervogel 2 (talk) 22:21, 6 January 2014 (UTC)
Just because marketing companies have agreed to distort the definition of a Gigabyte does not mean the definition should be changed. As this long talk history implies, there have been numerous lawsuits against these companies for misrepresentation, and the courts have ruled against the companies using base 10 in every one of them. If the accepted definition of a Gigabyte were actually 10^9, then there wouldn't be a legal need for a disclaimer on every product stating that 1 GB = 1,000,000,000 bytes.
It would be as if I decided to sell you a car that was yellow, but called the paint color "Orange" and put a disclaimer that "Orange = RGB(255,255,0)". Drumz0rz (talk) 01:57, 7 January 2014 (UTC)
No, it is not like that at all. It is more like the hard drive makers started making HDs with capacities like "40 MB", meaning 40,000,000 bytes, and then sometime later the RAM makers came out with the first Mib-sized chips and started using MB in the binary sense. The RAM manufacturers are the ones who made the mistake. The same thing happened with GB. The lawsuits you reference are irrelevant to this discussion, and in any case the HD makers are not at all precluded from continuing to use base 10. Nor is there any controversy whatsoever over a "100 megabit" Ethernet link running at 100,000,000 bits/second, or a "1.5 Gbit/sec" SATA connection running at 1,500,000,000 bits/second, etc.
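The decimal figures cited above are easy to verify, and the same arithmetic shows why the two usages drift further apart as the prefixes grow. A brief Python sketch; the three readings are taken from the comment, and the loop is illustrative:

    print(40 * 10**6)        # "40 MB" drive: 40,000,000 bytes
    print(100 * 10**6)       # "100 megabit" Ethernet: 100,000,000 bit/s
    print(int(1.5 * 10**9))  # "1.5 Gbit/s" SATA: 1,500,000,000 bit/s

    # The binary/decimal gap widens with each prefix step:
    for name, p in [("kilo", 1), ("mega", 2), ("giga", 3)]:
        gap = 100 * (2**(10*p) / 10**(3*p) - 1)
        print(f"{name}: binary exceeds decimal by {gap:.1f}%")  # 2.4%, 4.9%, 7.4%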
In any case there is clearly no consensus for your change, so please do not insist on it by continuing to edit-war on the article page. Per WP:BRD, the first revert is supposed to be followed by discussion, not another revert. Jeh (talk) 03:11, 7 January 2014 (UTC)
The use of the standards-based definition of the gigabyte has nothing to do with marketing, no matter how editor Drumz0rz and others want to twist it to resist the adoption of unambiguous units in computing. The history of the definition, especially for disk storage, is clear enough, and the metric definitions are commonplace today. The article has been stable in this form, having finally achieved some consensus among contributors. No need to revisit all this. Kbrose (talk) 04:03, 7 January 2014 (UTC)
I suppose there will be these kinds of refuseniks for many years to come, unnecessarily delaying accurate, unambiguous units that can be easily explained to newcomers without the conflicting viewpoints that have plagued this issue. The general computer-using public is increasingly uneducated in the technical workings of digital devices, and that this ambiguity still persists is rather astounding. But historically, it has always taken a long time for new units to establish firm ground, as can be seen in the US, which still uses obsolete units in daily life while its technical and trade professions have long adopted metric measures. The train for uniform metric interpretation of units of information, however, has left the station; it is only a matter of time until it is practiced throughout the industry. The common argument of refuseniks is that the general public doesn't know the new units and would be confused to find them on WP. Well, they will probably be confused in any case, if they don't know any of the units and aren't taught properly. That's what an encyclopedia is for: to present new information. It's a bogus argument, trying to instill fear of alienating someone, and so progress is very slow in these matters. Some of the most popular computer software already uses metric units and nobody is mounting lawsuits about it. Kbrose (talk) 23:17, 7 January 2014 (UTC)

Gigabyte, the ReBoot character

@Dark Liberty: removed "the cartoon character|ReBoot" from the About template in the lede, proclaiming "On the article, there is no reference to the character."

Uh, but there was; four references, in fact, as a simple text search of [6] will confirm. I reverted DL's change.

DL re-reverted, changing his reason to "Sorry, it's not on the major characters list. Wikipedia is not a soapbox."

Um, yes: one, the name "Gigabyte" did appear in the "major characters list", within the "Megabyte" entry, and was mentioned twice elsewhere in the article. Two, the original "About" entry did not mention, nor seem to require, the word "major"; what is at issue is whether or not the character name would be a likely search target, and that does not require that the character be noted in any particular list. And three, I have no idea what was "soapboxing" about the entry as it was.

(Gigabyte was a union of two characters who are itemized in the list, and is described that way in the list. So regardless of whether Gigabyte has his own bullet-point, it's pretty tough to argue that he wasn't a major character.)

Please note that re-reverting after a revert (i.e. insisting on one's edit after a revert shows that someone disagrees with you) is contrary to WP:EW: "When disagreement becomes apparent, one, both, or all participants should cease warring and discuss the issue on the talk page, or seek help at appropriate venues." See also WP:BRD. There is no second "R" before the "D".

Since the About template had been as it was for quite some time, thus establishing consensus, we are going to need something more than DL's lone accusation of "soapboxing", whatever that means, to change it. I invite DL to provide a better argument, one that is not trivially refuted as contrary to text that is glowing brightly on my screen and that is also consistent with WP policy and guidelines... particularly as regards WP:CONSENSUS.

In the meantime, I have edited the ReBoot article to make Gigabyte's name in the "major characters list" more prominent, since it was apparently easy to miss before. Jeh (talk) 10:15, 1 September 2014 (UTC)

@Dark Liberty: has not responded here in two days, and deleted without comment the notification of this discussion from his talk page. If there is no other objection I'm restoring the "about" template. Jeh (talk) 17:13, 3 September 2014 (UTC)
Megabyte transforms into Gigabyte in the final episode, so the character doesn't meet the notability requirement. Wikipedia is not a place where you include everything, with every special form of a character deserving its own disambiguation entry. Admins and editors alike will not agree that each and every form of a character deserves its own disambiguation. Thank you, though, for your contributions to the other article.
Gigabyte (which is really Megabyte) belongs in a Wikia article and not on Wikipedia. There was a discussion last week in the WMF about not including everything and keeping articles clean and informative. Gigabyte is a reference, and when people read this article they are looking for computer-science-related information on the term. Also, Gigabyte already shows up when you search ReBoot.
If all readers are in agreement that we should include a disambiguation for every other name a character has, with silence being consent, then I will edit the articles Kilobyte and Hexadecimal, and every other article that exists on Wikipedia, and place ReBoot references there as well. Per WP:Consensus, I hope that we are in agreement that the current About template is OK.
Have a nice day, Dark Liberty (talk) 17:52, 3 September 2014 (UTC)
Fair enough. You know, if you'd just explained your rationale when I asked you about it (instead of using bogus summaries like "not in the linked article", when he was, and accusations of "soapboxing", however that was supposed to apply), I would have agreed then too.
I actually think that nearly ALL in-universe "facts" about fictional characters, along with all episode lists and descriptions, should be off in Wikia. (I also think that Wikipedia should not be a parts catalog, nor a collection of descriptions of not-particularly-notable products.) But we have to work within the Wikipedia we have, not the Wikipedia we want, and the former is mostly determined by day-to-day consensus. I think you'll find that "a discussion last week in the WMF" carries very little weight here unless it results in changes to a policy page or, at least, a guideline page. Jeh (talk) 23:03, 3 September 2014 (UTC)
haha. Dark Liberty (talk) 09:22, 4 September 2014 (UTC)