For most of the last two weeks—well, last month and a half, really—I’ve been elbow-deep in a series of computers attempting to fix, then rebuild, then build, my desktop PC. I like working on computer hardware; my dad taught me how to build a computer over a decade ago, so I’ve scratch-built just about every desktop I’ve used since, and if it weren’t for the horrible expense I’d do it more often.
I have fond memories of building my first two computers, and my dad’s, which are almost sepia-toned nostalgia at this point. While I got most of the parts from a local specialty store—which offered great service and cut-rate prices—I remember going into now-defunct places like CompUSA and Circuit City to pick up odds and ends. The stores were nearby, and there’s only so much difference in PCI adapters or 56k modems or 80mm case fans. Now, unless you’re buying some piece of crap off the Best Buy showroom floor, the go-to places for computer hardware are online: NewEgg and Tiger Direct.
Less fond are my memories of building computers for friends. Well, Keving’s was kind of fun, because he bought pizza, and we got to mock him for trying to use the case’s feet pegs to hold the motherboard on. My ex-roommate’s computer was an object lesson in futility; we outlined a solid list of components, and he decided to cut costs at the last minute. (When he said “a $500-700 computer,” he’d apparently meant $525 and not a penny more.) Hilarious hijinks ensued. When I’d suggested a case, I’d double-checked to make sure the motherboard would fit. When he downgraded both case and motherboard, one was too big and the other was too small. (Guess which was which!) Also, to quote Rich as he wildly waved the case’s side around, making a sound much like an aluminum foil rainstorm, “I’ve wrapped meat in stronger material than this!”
The lessons learned included “Always fill out an RMA and mail things back within days, not weeks,” and “Never buy an open-box motherboard on the off chance some dipshit failed to overclock it correctly and burned everything out, no matter how much cheaper it is to buy an open-box AssRock.” And unlike Keving, he didn’t even pick up some pizzas.
So file this post under “man, times have changed”/”kids these days.”
Traditionally, I’m an AMD fanboy. My first gaming computer had an Athlon CPU, because it blew the Pentium III out of the water performance-wise and cost $110 instead of $199. My long-haul gaming rig had one of the first Athlon 64 CPUs, and an early FX chip—the line made for gaming enthusiasts as a Pentium 4 EE killer—at that. When I built desktops for my dad, a friend, and my roommate… yep, all solid multi-core Athlon 64s.
So it should be pretty damning that I went with Intel over AMD. Intel’s had a solid winning streak for a while now, with their Core i3/i5/i7 chips offering solid performance, particularly at the high end, for not a whole lot of money. AMD’s Phenom IIs were (more or less) comparable in both price and performance, keeping the balance of power if not AMD’s original performance advantage. But somewhere it all went wrong.
AMD’s next-gen architecture, Bulldozer, looked to be hot stuff: an eight-core, 3.6 GHz processor with multi-threading that broke the Guinness world record for overclocking (an astounding 8.4 GHz!). But its actual performance is disappointing and unpredictable. Unless you’re overclocking or doing high-end video or software rendering, its performance falls behind the higher-end Phenom IIs. Intel’s capable processors from last year, the i5-2500k and i7-2600k, continue to offer the best price:performance ratio, especially since they overclock like pros.
AMD has a strong roadmap for the architecture, hoping to improve performance by 10-15% with each successive yearly release. If that’s true, their next release, Piledriver, might actually surpass the Phenom II and Intel’s i5 and i7 CPUs… processors from last year which sell for a good chunk of change less than Bulldozer. Bulldozer has a lot of potential in its multi-threading, as proven by its solid performance for video work and rendering, but Piledriver has a lot of catching up to do to make the architecture competitive for gamers.
When I bought my two-terabyte external hard drive a year ago, it cost a measly $85. Internal drives were dirt cheap: a one-terabyte SATA HDD cost around $50, for example, and I’m talking about 7200 RPM drives from Western Digital. When I picked up a 160 GB internal hard drive recently, it cost around $80.
There are a lot of complaints about price-fixing and gouging, but there’s a more realistic reason. The 2011 monsoon season wracked Thailand, including the industrial parks where most hard drive manufacturers and component suppliers have factories. With the world’s second-largest producer of hard drives out of commission, prices spiked. In under a month, hard drive prices doubled or tripled, and they’ve yet to climb back down. Unless your need is critical, give it some time before picking up another one so the price can stabilize closer to reasonable.
It’s also worth mentioning the advantages of SSDs (solid-state drives), which are much faster and more efficient, if expensive. They lack the mechanical moving parts of a hard drive, so they don’t need to spin anything up to read or write, and they store info in microchips like USB flash drives… only much, much faster. Some newer Z68 motherboards, particularly those with Intel’s Smart Response Technology, can use an SSD as a cache for information stored on an HDD as a performance enhancer.
Maybe it’s because I bought so much RAM during the price-fixing years, when 2 GB of DDR1 cost just shy of $200, but memory prices today are more than dirt cheap. They’re just giving memory away. A solid set of 8 GB RAM from someone like G.Skill or Corsair costs around $50, which feels ludicrously low; at this point, there’s absolutely no reason why you shouldn’t pick up more if you have the slots free.
But only if you’re running a 64-bit OS: 32-bit operating systems can only recognize 4 GB at most, and earlier versions of Windows XP, only 3 GB. It’s also worth noting that I’m talking prices for DDR3, which is incompatible with older machines (and vice versa; if you haven’t upgraded, most likely you’ll have to pay an arm and a leg for DDR1, or somewhat less for DDR2).
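That 4 GB ceiling isn’t arbitrary; it falls straight out of how wide a memory address is. A quick back-of-the-envelope check (plain arithmetic, nothing platform-specific):

```python
# A 32-bit pointer can name 2**32 distinct bytes -- that's the hard
# ceiling on memory a 32-bit OS can address without workarounds.
addressable_32 = 2 ** 32                 # bytes
print(addressable_32 // 2 ** 30)         # -> 4 (GiB)

# A 64-bit pointer blows the ceiling off entirely:
addressable_64 = 2 ** 64                 # bytes
print(addressable_64 // 2 ** 30)         # -> 17179869184 GiB, i.e. 16 EiB
```

(The "only 3 GB" figure comes from the OS carving part of that 4 GB address space out for devices rather than RAM, so not all of it is usable for memory.)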
Lastly, as an example of why not to buy parts from big-box retailers… I made the mistake of glancing into Best Buy, where 4 GB of their shitty PNY memory costs $54. Compared to what I got off NewEgg, that’s half the memory for slightly more money. They did have mad deals on case fans and canned air, however.
Call it a video card or GPU, it’s the thing that lets your computer talk to your monitor (unless you have onboard video) and powers your gaming experience. Much like with CPUs, things have settled down to two major suppliers, in this case nVidia and ATI.
AMD bought out ATI a few years back, and redesigned their line of Radeon cards to offer more performance bang for the buck. The HD 4850 and 4870 were solid cards offering performance equal to nVidia’s contemporary 8800 and 9800, only for a lot less money. This was followed up with their 5850 and 5870 cards, high-end performance killers.
Sadly, ATI hasn’t been able to capitalize on those gains; while they’re still seen as the more cost-effective choice for low-end and midrange PCs, especially with two cards installed using CrossFire, nVidia remains the reigning champ in terms of high-end performance and popularity. ATI cards continue to have weird bugs and glitches, which doesn’t help things. While it mostly boils down to specific price points and personal brand loyalty, it’s hard to argue against nVidia’s market dominance.
A quick glance at the Steam hardware survey shows that around 60% of PC users go for nVidia cards. Though, interestingly enough, the aforementioned ATI 48XX and 58XX series are all in the top twelve. While my 4850 is a great choice, if it died tomorrow I’d probably jump ship back to nVidia and get a new GeForce GTX.
Here’s one for you.
If you’ve ever done a lot with computers, either in terms of upgrading new hardware, installing/upgrading an OS, or doing basic maintenance, you’ve probably seen the BIOS screen more than a few times in your life. It hasn’t changed much since the days of DOS: basic ASCII graphics, simple keyboard controls, overwhelming blue screen with all the input needed to change the boot device or configure power management.
That’s all a thing of the past now. The last decade saw the development of UEFI, the Unified Extensible Firmware Interface; after its antecedent, EFI, appeared on Macs and a few oddball Windows machines, UEFI is set to become a common feature of PCs, replacing the BIOS on the motherboard itself.
The new UEFI technology is an OS-firmware interface, which in non-tech talk boils down to “an interface to manage PC boot and runtime services.” You know, the same things the BIOS always handled. What’s changed, though, is a lot.
First off, it’s a graphical interface, so it looks like something we should be using in an age of 64-bit computing. Amongst other things, it displays real-time information such as temperatures, processor and fan speeds, memory size, and voltages—if you’re doing repairs or upgrades, things you might like to see in the first ten seconds of booting. Aesthetics aside, it also adds mouse support, probably its most useful feature. There’s also a quick and easy boot priority section—click and drag your boot devices into order—along with automated performance tweaks. And that doesn’t even delve into the advanced menu, which handles everything from automated overclocking to automated firmware flashing (the UEFI equivalent of BIOS updates).
Minor changes, compared to advances in memory, storage, or processors, but helpful and welcome ones at that… it’s simple, easy to use, and looks like something designed in this century. Probably its biggest advancement is raising the limit on hard drive partition sizes to a futuristic-sounding 8 zebibytes; the BIOS’s Master Boot Record scheme had a limit of 2.2 terabytes, which means your future machine’s 3-terabyte hard drive can only be fully recognized under UEFI.
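Both of those limits come from how many bits the partition scheme uses to count sectors. A sketch of the arithmetic, assuming the traditional 512-byte sector size:

```python
SECTOR = 512  # bytes; the traditional hard drive sector size

# The Master Boot Record stores sector counts as 32-bit values:
mbr_limit = (2 ** 32) * SECTOR
print(mbr_limit / 10 ** 12)    # -> ~2.2 (decimal terabytes)

# GPT, the partition scheme UEFI uses, stores 64-bit sector addresses:
gpt_limit = (2 ** 64) * SECTOR
print(gpt_limit // 2 ** 70)    # -> 8 (zebibytes, ZiB)
```

So the 2.2 TB wall is just 2³² sectors times 512 bytes; widen the sector address to 64 bits and the ceiling jumps to 8 ZiB.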