Talk:InfiniBand


Technologies based on DisplayPort, such as DockPort or Thunderbolt, capable of high-speed serial transport, are not even mentioned

Although they are not meant for system buses or internal paths, they include features such as direct memory access and can externally connect devices with such capabilities. They can also encapsulate USB or PCIe transports.

Therefore I think they should be included in summary sections below the article or paragraph texts, just to give a complete (and unbiased) overview of computer communication links. — Preceding unsigned comment added by Ldx1 (talk • contribs) 19:33, 9 May 2014 (UTC)

Neutrality of Unified Fabrics for High Performance Computing

The first two paragraphs of the "Unified Fabrics for High Performance Computing" section sound like blatant advertising. —Preceding unsigned comment added by 75.69.131.15 (talk) 23:56, 4 February 2008 (UTC)

I would concur. This (second) paragraph in particular is full of ill-defined superlatives, the kind of style that emanates from advertising departments disconnected from technical reality. It is nothing more than a flagrant abuse of the free medium of Wikipedia.

"InfiniBand is an industry standard, advanced interconnect for high performance computing systems and enterprise applications. 

The combination of high bandwidth, low latency and scalability makes InfiniBand the advanced interconnect of choice to power many of the world's biggest and fastest computer systems and enterprise data centers. Using these advanced interconnect solutions, even the most demanding HPC and enterprise grid applications can run on entry-level servers."

What is InfiniBand, anyway?

It seems to me that we need to find a better expression for what InfiniBand really is.

Is it:

- a switched fabric communications link (current version of the article, 22 September 2006);

- a switched fabric computer bus architecture (revision as of 15:39, 28 August 2006);

- a point-to-point high-speed switch fabric interconnect architecture (as of 00:21, 22 August 2006); or

- a high-speed serial computer bus (as of 21:12, 19 June 2006)?

I would say InfiniBand is not "a bus", but also not "a link". "Architecture" is definitely better than "link", IMHO. How about "technology"? "Protocol"? "Network"? "Communication protocol"? "Serial network data transmission technology"? (just brainstorming)

Please comment. Thanks. —Preceding unsigned comment added by Salsus (talk • contribs)

How about a "computer network technology" or "computer network architecture"? -- intgr 18:08, 22 September 2006 (UTC)

InfiniBand is simply an "interconnect fabric", in the same category as HIPPI, Myrinet, Quadrics, ServerNet, SCI (Dolphin), etc. It may be time to create an article (I didn't find one) for the generic "interconnect fabric", as an umbrella parent article generically covering all the various computer communications interconnects: from Ethernet and InfiniBand, to HyperTransport and QuickPath, to SGI's NUMAlink, Cray's SeaStar, and IBM's SP switch fabric. Hardwarefreak (talk) 10:49, 17 November 2010 (UTC)

I think "a switched fabric communications link architecture" is the best description. It is an architecture, but an architecture that defines a link. Youplaythat (talk) 15:33, 26 July 2012 (UTC)

Performance Citations

Someone needs to cite where the performance results (specifically, throughput) were found.

rivimey: I have included some example latencies for specific devices; there seems to be wide variation here. The devices chosen were simply those found in a Google search, and the numbers are those stated by the manufacturers' web pages. I found one archived email indicating that, as usual, the manufacturer had overstated things, in the email author's opinion.

Host Channel Adapter

AFAIK, the host channel adapter (HCA) term is unique to InfiniBand (as is target channel adapter (TCA)), so it probably doesn't make sense to have a separate encyclopedia entry for either.

13:59, 10 January 2006 (UTC)

There is no HCA article to merge with at the moment. I'm deleting the "suggested merge" tag. Alvestrand 10:02, 12 January 2006 (UTC)

Host Channel Adapter and InfiniBand should be merged together at the present time, though that may change in the future.

Timeline?

Can the original author change the sentence in the top paragraph containing "In the past 12 months (...)"? I have no idea when the article was written, but perhaps it should instead read something like "Since 2001, all major vendors are selling (...)". Mipmip 18:32, 29 July 2006 (UTC)


Copyrighted Content? / Neutrality

The initial paragraph seems to be copied exactly from "http://www.mellanox.com/company/infiniband.php". In fact, the whole article sounds very much as though it was written by an InfiniBand salesman (apologies if this suspicion is wrong). The information about the companies doesn't belong here in my opinion. It also doesn't mention similar competing technologies. I added the "Advert" tag. —Preceding unsigned comment added by 138.246.7.12 (talk • contribs)

If you look down through the history you'll see that various discussions of how InfiniBand relates to other technologies and challenges it faces have been deleted. Got tired of fighting with this one. Ghaff 23:04, 2 August 2006 (UTC)

It looks like many of the offending edits were made here: [1]. The IP address 63.251.237.3 belongs to "Mellanox Technologies, Inc.", so they obviously had an agenda. I'll revert these changes now, and I'd be happy to give a hand removing any future POV edits. I can't see any obvious fights (as you put it) in the history, though. -- intgr 08:49, 3 August 2006 (UTC)

rivimey: I have changed this paragraph significantly; I hope the result is (a) acceptable and (b) correct [I am not an IB expert!]. It would be good if someone had the time to register at the InfiniBand Trade Association's site to get the spec; this might help with hard facts :-) However, I am not sure how copyright relates to such activity, so I haven't done so.

Only the last two paragraphs seem to be an "advert"

Most of this article is simply a fact-based history of InfiniBand. It seems NPOV, and points out the original broad expectation for InfiniBand and the narrower reality of today's uses of InfiniBand. The comment about the lack of information on InfiniBand competitors seems out of place. Including competitors would risk turning the article into an advertisement or sales discussion. No one has requested mentioning competitors to Fibre Channel in its Wikipedia article. With InfiniBand's limited application primarily to low-latency HPC clustering interconnects, the only competing/alternative technologies today are SCI and Myrinet, and both of those have been largely displaced by InfiniBand. The other alternatives today for server clustering are either higher-latency technologies such as Ethernet, or low-latency protocols over Ethernet media, which are not widely accepted.

Storage over InfiniBand is a niche area, but important in the InfiniBand discussion. The value proposition for InfiniBand-based storage is twofold: one, higher bandwidth than Fibre Channel (roughly 2X 4Gb Fibre Channel, going to 8X with DDR InfiniBand); and two, simpler networking within InfiniBand clusters, not requiring two different switching mediums and gateways/routers between the two.
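For reference, here is a minimal sketch of the arithmetic behind those ratios, assuming the commonly published nominal rates (4G Fibre Channel at 4.25 Gbaud; InfiniBand 4X at 10 and 20 Gbit/s for SDR and DDR; 8b/10b encoding on all of these links). None of these figures are taken from the article itself:

    # Rough bandwidth arithmetic for the comparison above. Signaling
    # rates are nominal per-link values; 4G Fibre Channel and SDR/DDR
    # InfiniBand all use 8b/10b encoding, so the usable data rate is
    # 8/10 of the signaling rate.
    ENCODING_8B10B = 8 / 10

    signaling_gbps = {
        "4G Fibre Channel":   4.25,  # single 4.25 Gbaud serial link
        "InfiniBand 4X SDR":  10.0,  # 4 lanes x 2.5 Gbit/s
        "InfiniBand 4X DDR":  20.0,  # 4 lanes x 5.0 Gbit/s
        "InfiniBand 12X DDR": 60.0,  # 12 lanes x 5.0 Gbit/s
    }

    fc_data = signaling_gbps["4G Fibre Channel"] * ENCODING_8B10B
    for name, rate in signaling_gbps.items():
        data = rate * ENCODING_8B10B
        print(f"{name}: {data:.1f} Gbit/s data ({data / fc_data:.1f}x 4G FC)")

Under those assumptions the multiples come out nearer 2.4x (4X SDR) and 4.7x (4X DDR), with 12X DDR around 14x, so the exact "2X" and "8X" figures above depend on which link widths and nominal rates are being compared.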

The HyperTunnel comments are irrelevant. There is no information at the link provided on HyperTunnel over InfiniBand. This is a nascent technology/proposal at best. Newisys Horus, to which HyperTunnel over InfiniBand is suggested as a competitor, has no major OEMs. The market for a large Opteron SMP server has not emerged yet, and it is premature to suggest HyperTunnel over InfiniBand is the answer here. AMD has suggested it will offer greater SMP scalability in future Opteron designs, so both Newisys Horus and HyperTunnel over InfiniBand may eventually be irrelevant.

I recommend the part about InfiniBand-based storage be rewritten to explain the why rather than the who, and the final paragraph be removed. Discussing competitors and/or alternatives to InfiniBand moves the discussion from objective to subjective, and opens the floodgates to NPOV issues. —Preceding unsigned comment added by 68.219.195.185 (talk • contribs)

History

Nice to see a history section, but it seems a little short on dates. Did the events take place in the late 20th century, or some other time?

I couldn't agree more; technology-related articles HAVE to be dated. 85.250.148.1 (talk) 12:32, 21 February 2008 (UTC)

In particular, we need to have the individual revisions of the standard, the features (and compatibility limits) introduced by each one, and the documents that define them, cited by title and date. Also important are milestones (announcement, tapeout, shipment, etc.) for when (if ever) publicly available hardware implemented each one. 66.133.250.190 (talk) 02:48, 24 December 2013 (UTC)

An InfiniBand Boom?

It seems like we are on the verge of a boom in the use of InfiniBand, especially in the fast-growing high-performance computing arena. I don't have time to edit the Wikipedia article right now, but here are some references that could be used...

http://www.itjungle.com/tlb/tlb052907-story03.html

http://www.byteandswitch.com/document.asp?doc_id=124888&WT.svl=news2_1

http://www.idc.com/getdoc.jsp?containerId=prUS20701607

Westwind273 18:17, 11 July 2007 (UTC)

Switched Fabric

I don't understand how a point-to-point connection can be a switched fabric. Please explain it! --213.130.252.119 (talk) 08:10, 23 April 2010 (UTC)

"Point-to-point" can simply refer to the individual links after establishment, rather than to the addressing scheme used to create and break connections. See the difference between "permanent" and "switched" here. 66.133.250.190 (talk) 02:59, 24 December 2013 (UTC)
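To illustrate the distinction, here is a toy model (not InfiniBand-specific; all names are invented for illustration) of how dedicated point-to-point links and a switch combine into a switched fabric: each endpoint owns a private link to one switch port, and the "switched" part is only the forwarding decision between those links:

    # Toy switched-fabric model: every endpoint has a dedicated
    # point-to-point link to a switch port; the switch forwards
    # packets between those links by destination address.
    class Endpoint:
        def __init__(self, name):
            self.name = name

        def receive(self, packet):
            print(f"{self.name} got {packet['payload']!r} from {packet['src']}")

    class Switch:
        def __init__(self):
            self.ports = {}  # address -> endpoint on that port's link

        def attach(self, address, endpoint):
            self.ports[address] = endpoint  # one point-to-point link per port

        def forward(self, packet):
            # The switching step: choose the outgoing point-to-point
            # link based on the packet's destination address.
            self.ports[packet["dst"]].receive(packet)

    fabric = Switch()
    fabric.attach("a", Endpoint("node-a"))
    fabric.attach("b", Endpoint("node-b"))
    fabric.forward({"src": "a", "dst": "b", "payload": "hello"})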

Cray XD1/Mellanox misinformation

The following snippet from the InfiniBand article is factually incorrect:

"In another example of InfiniBand use within high performance computing, the Cray XD1 uses built-in Mellanox InfiniBand switches to create a fabric between HyperTransport-connected Opteron-based compute nodes."

The RapidArray router ASIC in the Cray XD1, originally called the OctigaBay 12K, is completely proprietary; it was originally developed by OctigaBay of Canada before the company was acquired by Cray Inc. The technology in the ASIC has its roots in high-performance packet-switching ASICs used in the telecom industry, in which both of the key venture partners, the CIO and CEO, had long employment histories.

Neither RapidArray nor the XD1 system contains InfiniBand or Mellanox technology. In fact, the signaling rate of the RapidArray interconnect links is many times that of InfiniBand, and the packet latency is much lower.

See: http://www.arsc.edu/news/archive/fpga/Tue-1130-Woods.pdf and http://etd.ohiolink.edu/send-pdf.cgi/DESAI%20ASHISH.pdf?ucin1141333144 Hardwarefreak (talk) 10:27, 17 November 2010 (UTC)

I just deleted the following for the reasons above. Please do not add it back in. Deleted on 02/01/2012.

"In another example of InfiniBand use within high-performance computing, the Cray XD1 uses built-in Mellanox InfiniBand switches to create a fabric between HyperTransport-connected Opteron-based compute nodes."

I thought I had deleted it before, but I can't recall. Please keep this out of the article, as it is not only factually incorrect but a fabrication. Hardwarefreak (talk) 09:10, 1 February 2012 (UTC)

I tracked down that this text was added by the user Ghaff (thanks to WikiBlame). I have invited him to this discussion, but I don't know whether he is active on Wikipedia anymore. -- intgr [talk] 10:00, 1 February 2012 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified one external link on InfiniBand. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 19:03, 13 November 2017 (UTC)

Proposed merge with Ethernet over InfiniBand

The InfiniBand article is not so long that this cannot all be in one article. I don't feel that the topic stands on its own as notable. Frayæ (Talk/Spjall) 17:01, 3 July 2018 (UTC)

The author of the Ethernet over InfiniBand article agrees and has merged the EoIB article into the IB page under the Extensions and Implementations headings. User:MaXintoshPro 19:08, 3 July 2018 (UTC)

Thanks. GBfan notes "since merged must be retained for attribution also makes a perfectly good redirect to a section", so that's still there if anyone links in. Frayæ (Talk/Spjall) 22:12, 3 July 2018 (UTC)

Comment about Oracle in 2016 is incorrect

> 2016: Oracle Corporation manufactures its own InfiniBand interconnect chips and switch units.

The linked article says that Oracle is using Mellanox chips. Greg (talk) 05:01, 23 March 2019 (UTC)

Speed difference for FDR / FDR-10?

According to https://www.advancedclustering.com/act_kb/infiniband-types-speeds/, FDR-10 is the same as FDR but with 8b/10b encoding. Who is right?
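For what it's worth, here is a sketch of the per-lane arithmetic using the commonly published lane rates and encodings; these figures are assumptions to verify against the IBTA specification, not values taken from the article or the linked page:

    # Per-lane data-rate arithmetic for the QDR / FDR-10 / FDR question.
    # Lane signaling rates and encodings are the commonly quoted values.
    lanes = {
        # name: (signaling rate in Gbit/s, data bits, encoded bits)
        "QDR":    (10.0,     8, 10),  # 8b/10b encoding
        "FDR-10": (10.3125, 64, 66),  # 64b/66b encoding
        "FDR":    (14.0625, 64, 66),  # 64b/66b encoding
    }

    for name, (signal, data_bits, total_bits) in lanes.items():
        data = signal * data_bits / total_bits
        print(f"{name}: {signal} Gbit/s signaling -> {data:.4g} Gbit/s data per lane")
    # QDR -> 8, FDR-10 -> 10, FDR -> ~13.64 Gbit/s data per lane

If those numbers are right, FDR-10 is better described as 64b/66b encoding (like FDR) at roughly QDR's signaling rate, rather than "FDR with 8b/10b encoding", but this should be confirmed against the specification before correcting either source.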