Colin B
Verified Member

Posts: 21
(I hate doing this but: 15 years of professional Linux system administration, and five years as a hobbyist of the same before that)
I'm strongly in the "can, shouldn't" camp. Theoretically, if your cable is so bad that the two computers have to retransmit the data so often that you get a buffer underrun, then yes, it'll have an impact. But assuming you are using a cable that can reliably handle the demands of basic home networking, there should be literally zero impact on the end product. So the short of it is: don't use super garbage cables, don't have insanely long runs of unshielded cable (20+ meters), and don't do something stupid like wrap them around house power lines.
If you're not interested in technical commentary about TCP/IP and why high-cost Ethernet cable doesn't make any sense, stop here.
Still here? Cool, let's rock! I'm probably going to end up being terminology-heavy and definition-light, so feel free to ask for clarification.
While the author is correct that the transport medium (the cable) carries a pulse-modulated analog signal, the sequencing and error-detection data (checksums) built into each packet at the protocol level means that a busted packet will be dropped and resent. In essence, there are only two possibilities: either the packet arrives unmolested, or it is thrown out and retransmitted. Both of these events are entirely transparent to an end-user application, which sees an unbroken stream of data. Again, and this is super important: IP is a datagram-based protocol which sends data in discrete chunks, TCP is a connection protocol sitting on top of IP doing quality control and managing successful delivery, and your operating system handles pulling the data out of the packets and reassembling them into a correctly ordered stream. Because of this layering, your application sees a continuous stream of data that by definition must be byte-for-byte identical to the equivalent bytestream generated and consumed locally.
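If you want to see what that looks like from the application's side, here's a minimal Python sketch of the receiving end (the hostname and port are made up, adjust to your setup). Note that this code only ever asks the socket for bytes; packets, checksums, retransmissions, and reordering all happen in the kernel, below anything the application can observe:

    import hashlib
    import socket

    # Connect to a hypothetical file server; the OS and TCP stack handle
    # packetization, checksum verification, retransmission, and reordering.
    sock = socket.create_connection(("mediaserver.local", 9000))

    digest = hashlib.sha1()
    while True:
        chunk = sock.recv(65536)  # read whatever the stream has ready
        if not chunk:             # empty read means the sender closed cleanly
            break
        digest.update(chunk)      # hash the reassembled byte stream
    sock.close()

    # This hash will match the sender's copy of the data every time the
    # transfer completes, no matter how many packets were dropped en route.
    print(digest.hexdigest())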
Before we go any further, a brief aside: all of this only applies to TCP/IP-based transmissions. If you use a protocol like UDP (also over IP), or multicast delivery (managed by IGMP, another internet-layer protocol like IP), you do not get any retransmission guarantees, ordered-reassembly guarantees, or any of that. Generally speaking, though, point-to-point transmissions (like streaming music from your media server to your playback computer) don't need the speed benefits of UDP, and don't need multicast at all. As an aside to the aside, this is why internet TV occasionally has graphics glitches: IPTV is typically delivered as multicast UDP (with IGMP managing group membership), and if a busted packet arrives there's nothing the protocol can do about it.
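For contrast, here's the fire-and-forget version at the socket level (again, the address is made up). With a datagram socket you hand the kernel a chunk of data and that's the whole transaction; if it gets mangled or dropped in transit, nothing below your application will ever retry it:

    import socket

    # UDP: fire-and-forget datagrams. If this packet is corrupted or dropped
    # somewhere along the path, nobody retransmits it and the receiver simply
    # never sees it (or sees datagrams arrive out of order).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"audio frame 42", ("mediaplayer.local", 9001))
    sock.close()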
So now that we've established that TCP provides delivery guarantees regardless of the quality of the physical link it runs over, it should be obvious that a packet-switched TCP network (a category that includes every home network) is incapable of data breakage that degrades playback quality. This isn't to say that it can't have data breakage that makes things impossible to listen to, but that would be because the underlying network was hostile to all forms of data transfer, most likely making the connection unusable outright. So, bringing things back around to the beginning: as long as your cable runs can push the bandwidth needed to copy the data stream faster than your playback consumes it (roughly 1.1 MB/s for 192 kHz, 24-bit, stereo uncompressed audio, basically the peak of what anyone should be shoving across the wire), plus whatever other networking demands you have, there will be zero difference between $1/foot and $100/foot cables, as the protocol literally makes that impossible.
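To put actual numbers on that bandwidth claim (assuming stereo, since that's what most of us are streaming), the arithmetic is short enough to do in a few lines of Python:

    # Worst-case sane home audio stream: 192 kHz, 24-bit, stereo, uncompressed.
    sample_rate = 192_000      # samples per second, per channel
    bytes_per_sample = 3       # 24 bits
    channels = 2               # stereo

    bytes_per_second = sample_rate * bytes_per_sample * channels
    print(bytes_per_second)    # 1152000, i.e. ~1.1 MB/s or ~9.2 Mbit/s

    # Even a plain 100 Mbit/s link carries that with ~10x headroom to spare.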
Questions? Comments?
EDIT: One note here: I'm talking about home networking situations where your media lives on one computer, your playback system is on another, and you're streaming either the file or the decoded PCM data across the wire (or DSD, I guess, for the six of you with DSD media collections). I am not talking about anything using a multicast or UDP-based broadcast system like internet radio. Mostly because anything without delivery guarantees going across the internet is guaranteed to suck, and no amount of good-vs-amazing cable will make up for that (remember, TCP/IP was designed to provide delivery guarantees over unreliable links).
EDIT THE SECOND: Doing an objective test is easy: copy a file across your network from your file server to your playback system, then run a file hasher on both copies (on Mac/Linux/BSD systems I suggest the command-line sha1/sha1sum; for Windows I'm sure you can find something). The hashes should be identical. The subjective test is basically the same: play the file back over the network, then play back the local copy; they should sound identical. If the copies hash to different values, your network sucks; if they sound different but hash the same, you're most likely imagining things, as the files are bit-for-bit identical. (Yes, I'm somewhat taking the piss here, but the whole thing is silly to folks with a compsci background.)
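If hunting down a hasher per operating system sounds tedious, a few lines of Python (which runs on all three) does the same job; the file names below are obviously placeholders:

    import hashlib
    import sys

    # Usage: python hashcheck.py original.flac network_copy.flac
    def sha1_of(path):
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB at a time
                digest.update(chunk)
        return digest.hexdigest()

    original, copy = sha1_of(sys.argv[1]), sha1_of(sys.argv[2])
    print(original)
    print(copy)
    print("identical" if original == copy else "your network sucks")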