Talk:Ethernet/Archive 5


History?

The History section indicates that Ethernet was the dominant (over Token-ring and Token-bus) technology by the end of the 1980s (which I believe is true), with the suggested reason being UTP wiring (which I believe is not true). 10baseT wasn't standardized until 1990, and its proprietary predecessors were never all that popular. I suspect that 10base2 was an important part of the transition, allowing small labs to network together without the full expense of 10base5's thick cable. Early 10base2 installations put BNC adapters onto 10base5 transceivers. Seems to me that UTP killed the already losing Token-ring, which might otherwise have stayed around a little longer (even with the UTP versions of TR). Gah4 (talk) 19:54, 28 June 2017 (UTC)

I have added a mention of Thinnet and made other improvements to this passage. I see this as a tipping point too but don't have a citation. ~Kvng (talk) 16:43, 19 May 2018 (UTC)
  • I don't read the current article as saying that. I would agree that by "the end of the 1980s", Ethernet was dominant. But I can't see anything saying that this was due to twisted pair.
It may not even be true. By 1990, there was no "internet to the desk" and PC networking was still very much the exception. There were, though, huge numbers of mini- or mainframe-backed systems in the markets that did have extensive networking. As IBM had such a large share of those, the size of Token Ring shouldn't be underestimated. When cheap desktop PCs were networked en masse, this was done with Thin ethernet, later 10baseT, and that's when Token Ring started to lose its share of the growing market (although not its established userbase). Token Ring took a long time to die though. I remember tales of buildings, especially 1980s high-tech buildings like Lloyds of London, where vast amounts of cable had been pulled to support the designed analogue phone circuits, but the wiring closets were choking with the new (and infamously inflexible) Token Ring.
In the late 1980s I was selling networking, almost all Thin ethernet. Although I had such memorable events as once abseiling down a lift shaft installing a vertical run of Thick. My new "head torch" was a great help in picking the whiskers out of the vampires! Note that TCP/IP was still far from common, except in the Unix world. MS-DOS was mostly using IPX/SPX Netware or some even more obscure proprietary thing. I did a lot of reviewing for the (well-heeled) London computer press around then, and there were lots of peer-to-peer small office network protocols that did a much better job, within their limits, of networking a few desktop PCs, and with less resource hogging. A couple (Sage MainLAN being memorable) went their own way with hardware. Sage managed to adopt the worst of both worlds on the physical hardware, combining Token Ring cable (if you wanted to go further than a desk) with the utterly unsuitable DB-9 connector. By 1990 though, the writing was on the wall for these and the market was buying weird proprietary implementations which might talk NetBIOS, but over standard Thin ether on the ubiquitous and reliably trusted NE1000 or 3c509 cards.
In early 1990 I set off for a course at 3Com (I was a "3Wizard") on this new-fangled 10baseT stuff. Oh how we laughed at the crazy idea of replacing our lovely space-efficient bus topology with star-wiring... Of course, history went differently. I still have a (very dusty) box of my old Thin ethernet installing supplies and it has barely been opened since about 1992. So 10baseT did come to dominate by 1993, still mostly Netware and heavily client/server biased, which still pre-dates "internet to the desktop" and the death of Token Ring. But 10baseT certainly wasn't the driving force of Ethernet's adoption.
By 1995 [sic] Windows had its own networking, with the introduction of Windows 3.11, but this didn't move beyond being a party trick and a nightmare of fighting memory shortages until the 32 bit [sic] Windows NT and Windows 95 arrived to act as servers or peers. Then the Internet came along, we saw sense, and (via Trumpet Winsock and PPP) even Windows got with the TCP/IP program. Andy Dingley (talk) 17:49, 19 May 2018 (UTC)
I was trying to work within the existing citation (Metcalfe video) in my revision. He doesn't actually mention Thinnet or UTP (or 10BASE5), but mentions adaptability and dominance by the end of the 1980s, and I extrapolated from there based on my own experience from the late 1980s and early 1990s and comments from Gah4. It would be good if someone could come up with an independent secondary source describing how Ethernet won. ~Kvng (talk) 13:48, 22 May 2018 (UTC)
Ethernet to the desk was fairly common in academic environments by 1990. I installed research-lab-sized 10base5 and 10base2 networks in 1990. This was just before 10baseT, when non-standard devices were available, but not quite ready enough to commit to it. I still have my vampire tap tool around somewhere. One thing I still remember from those days: to save money we bought two-port transceivers. Turns out that if you put a cable in both ports, you need to terminate unused ends at 82 ohms. It took a little while to figure out why some ports wouldn't work sometimes. Later, they were shipped with the terminators. (On the other hand, the four-port version didn't have this problem.) Token-ring might have been more popular in commercial environments. My favorite books are the Rich Seifert books, Gigabit Ethernet[1] and The Switch Book[2], which I believe have some discussion of Ethernet winning over the others. Rich Seifert was one author of the original DIX Ethernet. Gah4 (talk) 20:40, 22 May 2018 (UTC)


References

  1. ^ Seifert, Rich (May 1, 1998). Gigabit Ethernet. Addison-Wesley Professional. ISBN 978-0201185539.
  2. ^ Seifert, Rich (June 27, 2000). The Switch Book. Wiley. ISBN 978-0471345862.

protocol (not the movie)

There seems to be a question about Ethernet being, or not being, a protocol. Seems to me that MAC is considered a protocol, but PHY isn't. Does that help? Gah4 (talk) 13:22, 15 August 2018 (UTC)

I don't see anything wrong with Ethernet being called a "family of (link layer and physical layer) protocols". A protocol is a set of specifications and conventions that several entities obey. There are no network layer protocols here, but it's a bunch of network protocols for sure. Why shouldn't "protocol" apply to the link layer or physical layer? --Zac67 (talk) 17:20, 15 August 2018 (UTC)
I suppose even for PHY, but that is less obvious. At some point, something is too simple, so it isn't really a protocol anymore. But I believe that link layer (MAC) isn't too simple, and so deserves to be a protocol. MAC has gotten less simple in later Ethernet standards, too. Gah4 (talk) 18:39, 15 August 2018 (UTC)
I wouldn't call something like 10GBASE-T "simple" though... ;-) --Zac67 (talk) 19:56, 15 August 2018 (UTC)
I think I wouldn't either. But in some cases, one can simplify something that otherwise isn't so simple. Sometimes a (possibly large) look-up table will make something very simple out of something complicated, once the table is made. Consider TCP, with timers related to retransmission and fragment assembly, and the unpredictable timing of incoming frames. Compared to that, the PHY of 10baseT is pretty simple. But even the collision detect of the MAC for 10baseT has timers and such to get right, so it seems fair to call it a protocol. Even more so for later ones. And you only need one. Gah4 (talk) 23:28, 15 August 2018 (UTC)
Yes, I reverted this contribution. For a definition of what's meant by protocol in this context, I'm led to Communication protocol, which does not fully encapsulate the concept; we run communication protocols on Ethernet, but the electrical infrastructure is not itself a communication protocol. Others commenting here seem to be using a broader definition of protocol, and that's fine, except it creates unnecessary ambiguity with communication protocol, so why not avoid that and continue to call it a family of technologies? ~Kvng (talk) 13:33, 18 August 2018 (UTC)
I'm OK with that, but Ethernet's layer 2 is less a technology and more a set of protocols (or a single protocol with several options). --Zac67 (talk) 14:01, 18 August 2018 (UTC)
Well, Communication protocol actually says "The IEEE handles wired and wireless networking" when giving examples of protocols. One might argue about the PHY, layer 1, though I think that "electrical infrastructure" is oversimplifying it. But the MAC, layer 2, handles carrier detect, collision detect, exponential backoff, and retransmission, and some other details that I won't mention. Those are easily enough for an actual protocol. Gah4 (talk) 06:19, 19 August 2018 (UTC)
Carrier detect, collision detect, CSMA/CD are layer-1 functions in Ethernet (and generally, except for CSMA/CA moved to layer 2 in 802.11). --Zac67 (talk) 07:16, 19 August 2018 (UTC)
I can't tell if there is any remaining disagreement here. Is Ethernet best described as "network protocol family" or "family of computer network technologies"? I have trouble understanding exactly what the first means and whether it is correct. While the second is a broader description, I'm pretty confident it is not wrong. ~Kvng (talk) 23:36, 20 August 2018 (UTC)

each ethernet station

There is discussion about one MAC address for each ethernet station, but I don't know what a station is. As far as I know, there are two ways to do it. One, most commonly used, assigns an address to each interface. They are commonly in ROM (PROM, EPROM, EEPROM, etc.) on the NIC. The second, used at least by Sun systems, assigns one MAC address to each CPU. (Specifically, there is a PROM or EEPROM on the CPU board.) In this case, all interfaces connected to that CPU have the same MAC address. The only possible complication is that one might connect two interfaces, with different IP addresses, to the same ethernet. I don't know if this matters for the recent edits. Gah4 (talk) 05:06, 15 September 2018 (UTC)

Generally, there's one MAC address for each NIC or router interface, physical or virtual. The "one MAC address for each Ethernet station" (station = server or client) paradigm was coined when it was hardly imaginable to connect a server with multiple interfaces to the same network. Essentially, an individual MAC address is required for each interface to be addressed individually in the data link layer. The Sun scheme you describe causes a serious problem when a server is supposed to have multiple layer-3 links to the same network. --Zac67 (talk) 07:41, 15 September 2018 (UTC)
Yes. As I understand it, though, both are allowed in 802.3. I did once, in thick ethernet days with HP-UX, put two on the same net, as we had to connect two nets for some reason. But now, most systems know how to put more than one IP address on one NIC, so that shouldn't be needed. But the actual reason for this section, is that I didn't know what was meant by 'each ethernet station'. Gah4 (talk) 07:54, 15 September 2018 (UTC)
The 802 "IEEE Standard for Local and Metropolitan Area Networks: Overview and Architecture" defines an end station as "A functional unit in an IEEE 802 network that acts as a source of, and/or destination for, link layer data traffic carried on the network." So, wherever you initiate or terminate layer-2 connections you've got a "station". Of course, this isn't limited to Ethernet, there are many other networks using MACs. --Zac67 (talk) 08:15, 15 September 2018 (UTC)
OK, but even written that way, and in the case of a multi-homed host, is each interface its own end station, or is it the whole system? In the usual case, with either DMA or local memory, a NIC, once started, runs independently of the host CPU. Often the NIC has its own CPU, enough to be its own end station. Gah4 (talk) 11:11, 15 September 2018 (UTC)
There is a story (maybe I can find a WP:RS) that at a networking conference some years ago, one talk about Token Ring ended by saying something like "If you find an ethernet interface for less than $1000, buy it". The following talk was 3Com announcing the 3C501 for $999. When $999 was a low price for a NIC, no-one would think about putting two on the same net, except possibly for temporary or testing use. It made some sense to allow for one address per host, as Sun did it. Gah4 (talk) 20:42, 15 September 2018 (UTC)


Then there is DECnet, which maps DECnet addresses to specific, locally administered, MAC addresses. Gah4 (talk) 20:42, 15 September 2018 (UTC)
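
As an aside, a minimal sketch of that mapping, assuming the usual DECnet Phase IV convention (the locally administered prefix AA-00-04-00 followed by the 16-bit area/node number in little-endian byte order); the helper name is made up for illustration and is not quoted from any DEC specification:

    def decnet_phase_iv_mac(area: int, node: int) -> str:
        """Sketch: map a DECnet Phase IV address (area.node) to the
        locally administered MAC address used on Ethernet. Assumes the
        conventional AA-00-04-00 prefix; not quoted from a spec."""
        if not (1 <= area <= 63 and 1 <= node <= 1023):
            raise ValueError("DECnet area is 1-63, node is 1-1023")
        addr16 = (area << 10) | node            # 16-bit DECnet node address
        low, high = addr16 & 0xFF, addr16 >> 8  # stored little-endian
        return f"AA-00-04-00-{low:02X}-{high:02X}"

    print(decnet_phase_iv_mac(1, 1))   # AA-00-04-00-01-04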

As per the 802 definition above, a "station" is any entity terminating a layer-2 connection – it requires a MAC address in order to pass frames to it. So yes, each NIC is a "station". As I tried to point out above this is more a traditional term than one that makes a lot of sense today. (And yes, this is a kind of recursive definition...) --Zac67 (talk) 14:54, 15 September 2018 (UTC)
This topic is a bit muddled here on WP. We already have Node (networking), Communication endpoint, Data terminal equipment, End system, Terminal (telecommunication), Host (network), Network interface, Port (computer networking), Computer port (hardware) and Network interface controller. Probably not a good idea to create Ethernet station. Any suggestions on taming all this? ~Kvng (talk) 14:13, 16 November 2018 (UTC)
Currently, Node (networking) only somewhat distinguishes between layers. IEEE 802-2014 Clause 3.1 defines "end station: A functional unit in an IEEE 802 network that acts as a source of, and/or destination for, link layer data traffic carried on the network." --Zac67 (talk) 17:37, 16 November 2018 (UTC)
We don't have an End station article either. Even if we did, I believe it is common to use end station in a more generic networking manner than the IEEE definition you cite. ~Kvng (talk) 14:00, 19 November 2018 (UTC)
Generally, "station" means node, most often a layer-2 node. Maybe we should add a definition for "Ethernet station" here (so the IEEE def would make sense) or avoid that term altogether. --Zac67 (talk) 14:07, 19 November 2018 (UTC)
"Station" is used about a dozen times in two different sections. "Node" is used about half as much. I don't see a good reason for using both. Node is much more common elsewhere on WP. ~Kvng (talk) 15:27, 22 November 2018 (UTC)
It still seems that there need to be names for devices with more than one ethernet interface, separate from the name(s) for the interface itself. With the analogy of a train station with more than one track and platform, a computer station with more than one ethernet interface seems right. Node sounds more like an individual connection to a network, though the analogy makes slightly more sense in the coaxial ethernet case. In the case of a one interface device, there is no need for the distinction. Gah4 (talk) 21:43, 22 November 2018 (UTC)
In the context of your post that started this discussion, I think the term you're looking for is Host (network). ~Kvng (talk) 15:04, 25 November 2018 (UTC)

Ethernet/ATM

A sentence about Ethernet replacing ATM in WANs was removed from the article recently. I marked it as "citation needed" a few hours before it was removed. However, I disagree with the reason for its removal. In the changelog, the reason for removal was given as "Remove ATM comment, as ATM is meant as a WAN protocol, and shouldn't compete with ethernet." However, Ethernet is a layer 1/2 protocol and it doesn't matter which type of network it runs on (LAN, WAN, or MAN), as it only serves as the underlying link between devices. I was wondering if anyone had any reliable sources that state this, as it might be an important tidbit if it can be supported. If not, I'm fine with the removal of this sentence. --Eric112358 (talk) 05:03, 29 January 2017 (UTC)

Is ATM still in use? I doubt it. --Kgfleischmann (talk) 08:09, 29 January 2017 (UTC)
Well, my edit to this page will be sent to the Wikipedia servers over at least one ATM connection, so it's still being used in some places. However, it's probably being used in fewer and fewer places over time.
ATM was being promoted as a LAN technology, but I'm not sure that really went anywhere; Ethernet pretty much crushed its wired-LAN competitors, including ATM. The places where there's a real competition are probably the "first mile" (access networks), where you have ATM-based ADSL/ADSL2/ADSL2+ (I don't know what's used for VDSL), MPEG-2-based DOCSIS, and Ethernet in the first mile, and backbone WANs, where the competitors are presumably ATM, PPP, and Ethernet. The latter is probably what people think of as "ATM"; if somebody has a reliable source indicating whether there's a trend away from ATM towards Ethernet, that'd be useful. Guy Harris (talk) 08:30, 29 January 2017 (UTC)
Yes, I removed it and am happy to discuss it here. Technically, there is 10broad36, a form of ethernet meant for MAN use. DOCSIS frames are similar to Ethernet layer 2 frames, but the layer 1 is not Ethernet. More specifically, it isn't part of 802.3. 802.11 also uses similar layer 2 frames, but isn't 802.3 and isn't Ethernet. Fiber Ethernet distances are getting longer, and could compete with some other MAN and WAN systems. ATM could be used for LANs, but I suspect that only makes sense when connecting through to an ATM WAN. (Though that likely went away with 100baseT.) In any case, with a reliable source, I am not against a new statement showing that they are in competition, and comparing them. Gah4 (talk) 11:00, 29 January 2017 (UTC)
The removed statement was "Ethernet has replaced the ATM circuit-switched technology first developed in the early 1990s." There was no WAN qualification. I think it is fair to say that ATM has been replaced but saying that it was replaced by Ethernet is an oversimplification or overstatement. I'm not in favor of restoring this statement. I think that it would be good to mention in this section new Ethernet WAN and MAN applications such as 10BROAD36 and the connections to DOCSIS and MPLS. ~Kvng (talk) 15:38, 7 February 2017 (UTC)
I added the WAN qualification. As far as I know, and I could be wrong, ATM never had a chance as a LAN technology, except possibly as an entry to a WAN. That is, someone might put in an ATM switch between some local servers and an outside WAN link. ATM is a lot more complex (and so more expensive) than competing LAN technology, even more than the other LAN systems that ethernet killed, such as FDDI. 155Mb/s (minus overhead) was fast in the 10 Mb/s ethernet days, but not so much faster than FDDI or 100Mb/s ethernet. ATM might have enough advantage as a WAN technology to make up for the complexity and cost. It makes sense to me to discuss ATM vs Ethernet as WAN technologies, and I wouldn't be against any such additions to the article. Gah4 (talk) 16:50, 7 February 2017 (UTC)
ATM proponents had high aspirations. ATM was going to solve all networking problems from the desktop to the backbone to the WAN. Clearly this was not realistic. ~Kvng (talk) 14:04, 10 February 2017 (UTC)
About 20 years ago, I knew someone with an ATM switch. (I didn't stay around to see it actually work.) As I understand it, they needed access for large volumes of data to a far away lab, when the Internet was not yet up to the task. Remember when getting 100KB/s across the room was considered fast? Also, the project had the budget for the hardware and the link. I am not sure what proponents might have known, but the solution seems to be a much faster Internet. There are still some uses for private WAN links, but not so many as one might have thought 20 years ago. Gah4 (talk) 15:25, 10 February 2017 (UTC)
FWIW the whole of the UK is covered with ATM. It's almost completely invisible to the users though. The authorities decided to layer most of the UK's entire Internet on top of ATM. It wasn't a totally bad decision to layer it like that, because it meant that the UK didn't end up with a bunch of monopolies nearly so badly as America (you can switch ISPs at the drop of a hat), but that ATM is looking quaint now. GliderMaven (talk) 05:48, 7 February 2019 (UTC)

total length

For signal degradation and timing reasons, coaxial Ethernet segments have a restricted size. Somewhat larger networks can be built by using an Ethernet repeater. Early repeaters had only two ports, allowing, at most, a doubling of network size. In the early days, thick coaxial cables of up to 500m and FOIRL links were used. Within the timing limits you can chain them, allowing for somewhat more than a doubling of the 500m size. Does size mean physical size or the number of taps (hosts)? In the latter case, you can connect many repeaters between a backbone cable and branch cables, for a really large number of hosts. Gah4 (talk) 18:49, 5 October 2016 (UTC)

This is a network diameter restriction required for collision detection to work reliably. Network diameter is the longest cable distance between any two nodes on the network. ~Kvng (talk) 16:03, 25 March 2020 (UTC)
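As a back-of-the-envelope sketch of why this is a physical-size limit, assuming the usual 10 Mb/s figures (a 512-bit slot time, roughly 0.77c propagation in thick coax, and about 2500 m maximum collision-domain diameter under the 5-4-3 rule; none of these numbers come from the discussion above):

    # Rough sketch of the 10 Mb/s collision-detection timing budget.
    bit_time = 1 / 10e6                 # seconds per bit at 10 Mb/s
    slot_time = 512 * bit_time          # 51.2 us: worst-case round trip allowed
    velocity = 0.77 * 3e8               # m/s, approximate for thick coax
    diameter = 2500                     # m, assumed maximum end-to-end path
    round_trip = 2 * diameter / velocity
    print(f"slot time  = {slot_time * 1e6:.1f} us")    # 51.2 us
    print(f"round trip = {round_trip * 1e6:.1f} us")   # about 21.6 us on cable alone
    # The remaining ~30 us is the budget for transceivers, repeaters and
    # AUI drop cables, which is why the configuration rules are so strict.

This only bounds the physical span; by itself it says nothing about the number of taps.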
The limit is for two nodes on a segment (OSI Layer 1). The network can be bigger than that, but the links need to be switches or bridges, not merely a packet-level repeater. Andy Dingley (talk) 17:11, 25 March 2020 (UTC)
Just to hop out of the weeds here, the limit is on physical size. Certainly there is some practical limit, but I'm not aware of a design-rule limit on the total number of nodes for this type of Ethernet. ~Kvng (talk) 17:02, 28 March 2020 (UTC)
There are limits on the number of taps on the coaxial segments. I did once see an 8 port (BNC) repeater, but not more than that. With 24 port repeaters, you could get pretty many on coaxial cables, though watch out for knots. Gah4 (talk) 18:42, 28 March 2020 (UTC)
  • There's no hard limit on the number of taps; it's about the total losses. As some types of tap were lossier than others, that varied.
Around 1990, I was working for a well-known connector manufacturer. Like many, they had invented a no-break 10base2 connector, with two BNCs and a flat pluggable tongue in the middle. Mechanically, it was excellent. They were a connector maker who knew all about reliable contacts. Seems they knew less about RF design though. Turns out these things had about six times the insertion loss of a normal T piece. They worked fine across one desk, but use more than a handful of them on a segment and it stopped working. Although instructed to "eat our own dogfood", I had to quietly lose all of them one night to make the network reliable again.
The size limit is a physical size limit, based on timing and signal propagation time down RG-58.
There is, AFAIR (the blue DIX pamphlet), a hard limit on number of nodes too, but it's enormous and far higher than any of the other practical limits would permit. Andy Dingley (talk) 19:56, 28 March 2020 (UTC)
"There is, AFAIR (the blue DIX pamphlet), a hard limit on number of nodes too" Section I, "Introduction", of the 2.0 DIX spec says, under "Physical Layer", "Maximum number of stations: 1024". The only other 1024 that Preview (macOS) could find in the document is in the backoff algorithm, with 1024 being the maximum backoff; I don't know what motivated the 1024-station maximum. Guy Harris (talk) 20:52, 28 March 2020 (UTC)
The backoff limit is statistical. Well, first, it only applies to hosts actually transmitting. In backoff, each host picks a random number of slot times to wait, in the later stages between 0 and 1023. If only one host picks the lowest number, it transmits; otherwise there is another collision and everyone tries again. More hosts trying makes it more likely that more than one picks the same lowest number, but nothing magic happens at 1024. Especially as, for most nets, not everyone is active at the same time. (Unless everyone is doing online work due to Covid-19 rules.) Gah4 (talk) 21:53, 28 March 2020 (UTC)
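
For concreteness, a minimal sketch of the truncated binary exponential backoff being described, assuming the usual 802.3 parameters (the range doubles per collision, capped at 0-1023, and a station gives up after 16 attempts); the function name is made up:

    import random

    def backoff_slots(collisions: int) -> int:
        """Sketch of truncated binary exponential backoff: after the n-th
        collision a station waits a random number of slot times in
        [0, 2**min(n, 10) - 1], i.e. at most 1023 slots."""
        if collisions > 16:
            raise RuntimeError("too many collisions, frame is dropped")
        k = min(collisions, 10)                # exponent is capped at 10
        return random.randint(0, (1 << k) - 1)

    # After the 3rd collision a station waits 0-7 slot times; from the
    # 10th collision on, the range is 0-1023, which is where 1024 appears.
    for n in (1, 3, 10, 15):
        print(n, backoff_slots(n))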

GA status

I recently completed addressing all complaints from a failed 2011 GA review. I submitted for a new GA review which prompted Chiswick Chap to insert 19 {{cn}} requests. I have withdrawn the request for GA review. ~Kvng (talk) 13:48, 20 April 2020 (UTC)

Kvng - I'm not sure if you missed it by accident, but removing the nomination from Wikipedia:Good article nominations isn't necessary; you just need to remove the GA template at the top. Best Wishes, Lee Vilenski (talkcontribs) 13:58, 20 April 2020 (UTC)
 Done ~Kvng (talk) 14:03, 20 April 2020 (UTC)