Words really cannot do justice to this game; it must be seen to be understood. Its trailer captures and relates the spirit of the game:
It is the future! The year is 2007. The apocalypse has had an apocalypse. Remember those old trashy action movies that used to come on after dark on cable, that you taped and watched endlessly as a kid? This game is one of those movies. As the designers call it, Blood Dragon is an ’80s VHS vision of the future, a flashback to how action movies of the Reagan years depicted the future—err, our recent past. It’s overwrought, overflowing with ultraviolence, spewing forth one-liners and bad puns in every direction.
I think I’m in love.
Instead of loading screens, there are “tracking” screens; the draw-distance doesn’t have obscuring haze or fog, it has VHS scan lines. The in-game movies are straight out of the 16-bit console era. Protagonist Rex “Power” Colt (a gravelly-voiced Michael Biehn) is influenced by Universal Soldier, with more than a little G.I. Joe thrown in; his nemesis Colonel Sloan wears the same kind of chainmail vest as the bad guy in Commando. The weapons are homages to such classics as Terminator 2 (shotgun), Predator (minigun), and Robocop (pistol, sniper rifle). The jeeps are straight out of every bad action movie ever, right down to their horrible off-road handling. And let’s not forget the eponymous blood dragons, giant neon lizards that shoot freakin’ laser beams from their eyes.
Actually, just about everything here is doused in neon—the bow glows blue neon, the hordes of cyber-goons glow red neon, the scenery is a sugar rush of colors. Compared to most shooters, which are drenched in a range of bland from “earth-tones” to “shit-covered,” Blood Dragon is almost seizure-inducing. It’s also got an impressive retro soundtrack from Australian music duo Power Glove that’s been on heavy rotation on my iPod since its release.
The gameplay is radically different from the other Far Crys; for the first time, I felt like a true badass in a video game. You can run faster than those crappy jeeps, you have stealthy takedowns for enemies, and while your level-up path is linear, you get plenty of excellent upgrades. The blend of action and stealth is excellent, and it’s equally possible to run in guns blazing, or sneak around killing enemies with your bow and stealth takedowns. The game takes a perverse pleasure, though, in having those stealth scenarios go awry, and there’s a chaotic exuberance to fucking up at being stealthy and getting into a prolonged firefight where reinforcements (and blood dragons) are called in.
At first, I thought of another major franchise combining comedy and violence, the Saints Row series. The two are very different flavors. Saints Row has increasingly been approaching comedy like a baseball-bat-sized floppy purple dildo to the face—an over-the-top assault of crazy. Blood Dragon is much more tongue in cheek, only where the tongue is protruding through the gaping hole in said cheek, and more than occasionally turns into a leering grimace. Some of its lines are drop-dead hilarious; others, like Rex arguing with his internal A.I. over tutorials, and some of the oft-recycled one-liners when he kills a cyber-goon, can come off as grimly glib, even forced. After all, this is the game where you rip out the enemy’s glowing blue cyber-hearts and use them to lure neon dinosaurs around the map.
About halfway through the game I realized that, underneath its candy-colored shell and assortment of ’80s references, this is still a Far Cry game, and living in a post-Far Cry 2 world means a lot of cycling enemy patrols, assaulting cookie-cutter bases, doing three similar types of side-quests to unlock upgrades, and finding hidden “secrets” scattered across the map. I’m not a huge fan of repetitive grind, especially when it feels like filler to pad a short single-player campaign; thankfully Blood Dragon ended up being about the right length. If you stuck to the story missions, you could probably finish it in 5-6 hours. Giving in to my OCD-ness and completing all the content, along with a bit of wandering around for shits and giggles, took me a grand total of 11 hours.
About the only thing I can complain about—besides my dislike for the series’ devolution into repetitive gameplay—is that it requires Uplay, Ubisoft’s new proprietary game hub. (“But everyone else was doing it!”) While it’s no Steam—and it’s frustrating to boot into yet another platform from Steam—at least Uplay is leaps and bounds ahead of Origin. It doesn’t transfer achievements (which is why the Steam version doesn’t have any), but it does have an interesting system where earning achievements and accomplishments gives you Uplay points to unlock content like wallpapers and music files. (Yippie.)
What really impresses me is that someone at Ubi came up with a risky idea, and then management let them run with it—using one of their cornerstone triple-A brands, to boot. I applaud them for taking the risks; some of the game elements feel like baby steps, like the devs could have expanded on an idea or pushed a boundary, but for the price I got more than I expected. (MSRP is $15, I picked it up during the Steam summer sale for $8.99. You can get the digital soundtrack for $7.99 at Amazon.) Also, it’s worth pointing out that the game is a standalone DLC title and doesn’t require Far Cry 3, despite its branding.
Commentators have wondered if the same nostalgic magic will work for other Ubisoft properties; “Watch Dogs: Pound Puppies edition” was one suggestion thrown out there. But I don’t think it’s the flashback of ’80s nostalgia that made Blood Dragon work. I think it was stepping outside the box, taking risks, and coming up with something radically different from anything else out there that made such a splash. When you look at the lineup of first-person shooters released in recent years, you have a few outliers (Bioshock, for example), but most fall into the modern military “bro shooter” category. In that sea of games with coffee-stained graphics, whose gameplay tends to revolve around linear corridors and cover-based shooting, the neon-infused homage/loving satire of ’80s action movies stands out like a sore thumb. Or a glorious beacon of hope, who knows.
Or, “What the hell, Samsung?”
This is the new Samsung Galaxy Gear, above the new Galaxy Note III. It’s one of the first Bluetooth-capable “smartwatches” by a major manufacturer. The picture above doesn’t make it look too bad, but take a gander at the gallery in Ars’ hands-on coverage, which displays its many interface icons. White space, as far as the eye can see—the wasted space and potential are a disappointment. All for only $299—and it can only pair with Samsung phones running Android 4.3. (That would be the Galaxy Note III, Galaxy Note 10.1, and, after an update, the Galaxy Note II, Galaxy S III, and Galaxy S4.)
There’s been a lot of bemused wonder at the purpose of a smartwatch; to be fair, aside from being an interesting gadget, there is less need for smartwatches than smartphones. The technologist in me is really into the idea of a smartwatch—even though my avoidance of cell data contracts and hatred of major cellular providers means I don’t have a smartphone, I have enough potential devices to pair with one, and think it’s a cool idea for a gadget. Something everyone needs? No. Would I consider a smartwatch when my ancient Casio solar atomic watch, a Christmas gift from my parents back in 2004 or so, finally kicks the can? Hell yes. (Provided these future smartwatches come with bands capable of encircling my gorilla-wide wrists.)
As the 2011-model iPod Nano proved, and the Pebble e-paper smartwatch seconded, there’s a market out there of people who want to wear their tech on their wrist. The problem with the Galaxy Gear is that the previews show it almost devoid of aesthetics, a simple black interface that happens to display weather and emails and who’s calling you. It comes with a number of features, including a pedometer, music player, and a voice recognition system (that reporters say leaves much to be desired). The Kickstarter-record-breaking Pebble looks like a better buy: it has longer battery life (7+ days instead of “about one”), pairs with almost any device, and costs half as much as the Galaxy Gear. The difference is, Pebble uses an e-paper display (much like the basic Kindles and Nooks) as opposed to the Gear’s bright OLED, and is button-operated rather than multi-touch.
This would be part of the problem: concept art from earlier in the year of what a Samsung Galaxy watch would look like, based on various patents and documents leaked into the tech sphere. Flexible glass, a code Samsung’s been trying to break for ages, with a beautiful design and colorful Android-style interface. That looks pretty damn hot; it’s both fashion-forward and technologically-forward. I’d consider spending $299 on that; the form factor is impressive, the design is elegant, and the display looks bright and versatile. What reality revealed was nothing as impressive, and those expectations came crashing down.
Meanwhile, Google’s been planning a smartwatch for over a year, since it purchased WIMM Labs. My assumption is that the Google watch will be like its Nexus products: capable of pairing to just about any Android device (possibly even Windows Phone/iOS if they want to expand its market share), running a stripped-down version of Android 4.3 Jelly Bean, making for a colorful, stylish OS. It’s in Samsung’s best interest to only support Samsung devices; Google, father of Android, would want to pair with all of them, save for walled-in versions like Amazon’s Kindle. The possibilities are pretty cool: NFC/Android Beam to trade images, contact vCards, etc., for one; swiping your wrist/Nexus Watch to make Google Wallet purchases at point-of-sale machines for another. Samsung has many of the same standard Android features integrated into the Gear, but Google has a tendency to undersell the competition to get product out to more consumers—and, by proxy, acquire more search data.
Apple’s been rumored to have an iWatch in the works for years. My expectation there is something higher-priced (though still cheaper than the Galaxy Gear) due to its focus on “a single slab of sculpted aluminum” and all that Apple design elegance. (The iPod Nano only cost $149, but if Apple priced the iWatch at $200 they’d still sell like hotcakes.) It’d make sense to market them in the same colorful spectrum as the 2012 iPod line, something Samsung’s trying with the Galaxy Gear (though its colors are a bit more cartoony). I also wouldn’t be surprised if an iWatch had the same minimalist interface as iOS 7, and (only half joking here) if it came with its own proprietary wireless connection (to match what Apple’s done with physical connections, à la Thunderbolt, Lightning, 30-pin, etc.) that theoretically offers similar or superior performance, but restricts use to the Apple environment. Solving a few minor quirks of the iPod Nano watch to enhance the user experience would make for a decent (if chunky) smartwatch; I expect something thinner, more colorful, and more focused on being fashion-forward than the Samsung or Google competitors.
Outside of the press getting a hands-on at the unveiling event, Samsung’s been tight-lipped so far with its Galaxy Gear smartwatch, but some commentators remain unimpressed. I know I am. It’s not bad, but it’s not the $299 knockout that would sell consumers on smartwatches. Even if the display screens were there to keep an impressive UI secret until launch, there’s still the odd folding clasp and the childishly-colored, rubbery watchbands to deal with. Plus, the battery life is about the same as the iPod Nano’s, and daily charging was one of that device’s failings. And nobody has mentioned the real slam-dunk yet: bundling a Galaxy Gear with a compatible Galaxy phone when you sign up for a cellular data contract.
Samsung may have fired the first (er, second) shot in the smartwatch war, but they’ll face some tough competition once Google and Apple reveal the designs they’ve been holding on to. And to be honest, if smartwatches don’t look better than this, I don’t see them becoming anything more than a niche product for gadget hounds.
I spent a few hours buying and setting up a new Roku 3 last night. It’s a replacement/upgrade for my old Roku XD (2050X), which was gifted to me years ago and has streamed its fair share of movies and TV, and even influenced my parents into buying a Roku 2. But the first-gen Rokus are at the end of their service lives: they can’t upgrade to the newest software, with its slick layout and a search function that combs multiple apps; they don’t have the same impressive hardware as the new version; and several high-profile channels, like PBS and Spotify, are unavailable on the old boxes.
The old box is going downstairs to support an older TV that gets less use—one of the great features of the old Rokus is their analog component/composite cable ports, while the newer ones push HDMI for HDTVs.
Setup is pretty easy; logging into my Roku account, it automatically downloaded all of my stations—even ones no longer supported or usable, so thanks for that—and as soon as I went into any of them, it gave me a four- to six-character passcode. Logging into that app’s website and entering that passcode was all it took to access my content, playlists, and queues. It probably could have taken less time to get everything squared away, but my desktop is at the opposite end of the house. (I avoided using my tablet because I hate typing on touchscreens and didn’t want to fight the various services when they’d inevitably try to route me to their mobile sites or apps, which was not where I needed to go.)
And I have to say, the Roku 3 is a zippy little devil. The Roku XD chugs along like a champ, but it will take a good 20-40 seconds to finish loading a channel, and often pauses to buffer 720p HD content back down to two or three dots’ worth of SD quality. (No, it’s not because of my internet speed or provider; yes, it’s because my residential gateway is across the house with my PC.) The Roku 3 uses dual-band wi-fi, so the connection is stronger and I haven’t had any issues streaming HD content. It blazes through menus, opens apps/channels in seconds, and begins streaming in a snap. That’s probably because it’s powered by a dual-core Broadcom ARM Cortex-A9 system-on-a-chip, a great speed boost compared to the legacy 400MHz and 600MHz processors. Consider that similar dual-core Cortex-A9 SOCs power capable tablets and smartphones like the iPhone 4S, 3rd-gen iPad, and Galaxy S II; quad-core A9 SOCs are in my Nexus 7 and my dad’s Transformer Prime TF201.
Roku was the original streaming box—its first name was the Netflix Streaming Box when it ran the rumor mills back in, what, 2007? Compare those images with the early 2008 Roku design—and it remains the sweet spot for midrange streamers. Hardcore technologists would build their own home theater PC (HTPC) or Steam box in a mini-ITX case, or take routes as divergent as jailbreaking original Xboxes to run XBMC or re-purposing Mac Minis as media storage servers. Most video game consoles and Blu-ray players come with app support these days for Netflix, HuluPlus, and others, and other users are content with the OEM appstores that come with their smart TVs. But most people just want a plug-and-play device, hence the rise of set-top boxes and media streamers; in the olden days of SDTVs and “dumb” HDTVs, a set-top box was the easiest way to watch Netflix or HuluPlus in the living room.
Roku has more or less won that round. Google TV went from a canned prototype device to become another app-store for smart TVs, powering OEM set-top-boxes. Apple TV remains a niche product line, a great buy for anyone plugged into the Apple ecosystem—Mirror your iOS device! Stream your iTunes media library! All on your home TV!—but has a slimmer range of content providers and less technical innovation. (The similarly-priced Roku 3 has a Micro SD slot hidden under its HDMI port, and a USB jack so you can connect a flash drive or external HD and play your own content). Start-up Boxee was recently folded into the Samsung electronics empire. The OUYA has finally hit the scene, several years and several million customers behind the rest of the set-top-box crowd, though with a principled mission of end-user modification.
Part of it is from simplicity—Roku just works. There’s even the Roku Streaming Stick, a $99 stick to stream your content; talk about plug and play. Part of it is from content providers; while Roku is going quantity over quality here, it’s got the grand video trifecta of Netflix-HuluPlus-Amazon Instant Video, along with music from MOG, Rdio, and Spotify, plus several other high-end apps as well: Vudu HD videos, Pandora internet radio, MLB.TV, and value-added channels for subscribers of HBO, Epix, Time Warner Cable, and others. And part of it, as the Roku 3 shows, is innovation. The new menus and layout show a wealth of content at a glance, rather than the limited “carousel” view of the 1st gens; the search function will ply through the most-used channels to see who can provide your content; the dual-band wi-fi and Cortex-A9 SOC give it a speed boost; and it excels in other technological departments as well, such as the remote, which connects via wi-fi (making it omnidirectional and usable in other rooms) and has its own headphone jack so you can stream at night without waking up the spouse (or the parents).
About the only thing Roku ain’t got is a YouTube channel; not that I’d use it much, unless YouTube users start producing content in HD, considering I already pay for music and video subscription services. (I’d actually prefer a Last.FM station over Pandora.) Another improvement would be a remote with a QWERTY keyboard on the back, a feature I desperately missed while going through initial setup. The new remote is also perfect for old-school gaming—the directional pad and A/B buttons make it look like an NES controller when turned sideways—but apart from a few 8-bit Namco games, Angry Birds, and some trivia spin-offs (You Don’t Know Jack, Jeopardy), the Roku’s gaming experience is dead on arrival. Give it time and hopefully some enterprising developer will come up with their own private channel to run side-loaded S/NES ROMs; Roku’s dev support has been second to none.
As much as I love Roku—not just the device, but the brand, which is very supportive of its customer base… if you discount the 90-day warranty and the abandonment of the 1st-gen Rokus—I have to wonder what their future holds. Browsing the big-box stores today finds you a legion of smart TVs, and in another few years just about every TV will come with its own brand of smart TV features, which will cause the need for a set-top box to diminish—unless I’m wrong about consumers, the streaming box is an evolutionary dead-end. Most of the smart TVs I’ve tried were as slow and sluggish as my 1st-gen Roku, if not more so, and their user interfaces leave a lot to be desired. There’s a reason the Roku 3 is cleaning their clocks. It hasn’t led to à la carte TV channel subscriptions—you need an HBO subscription through your cable/satellite provider to use HBOGO; and they wonder why people pirate Game of Thrones?—but it has led to a dead simple way to get your subscription streaming services on a dumb TV.
Still, smart TVs will just keep on improving as they become more and more commonplace; Roku can’t hope that smart TVs will be stupidly designed forever. I have the feeling that in another few years, all Samsung smart TVs will come with their own Boxee app center, their own Boxee circuit board bolted to the back. I could see Roku going in that direction, getting picked up by some HDTV manufacturer who wants to weld existing streaming technology to their existing line of HDTVs. That, or Roku can keep surprising me with added functionality and improved hardware, and take their box in new and impressive directions.
There’s a reason I’m so hard on games like Metro: 2033, Far Cry 2, and even Bioshock 2: it’s because of games like Bioshock: Infinite. Games that are not only a step above the herd, but games that raise the bar of excellence and become milestones in their genre.
That’s not to say that Bioshock: Infinite is perfect. It leans to the short side—even mining it to glean every secret I could, I barely topped 14 hours, and most players I know beat it in 10-12. Why do you acquire superhuman powers (vigors) from vending machines? Because Bioshock had plasmids, and System Shock had psionics before it; there’s no story reason for them this time, though, so they feel tacked-on to keep the game mechanics symmetrical to its predecessors. The ending of the game is a cerebral mindfuck, which is good, but it’s on rails, which will bother some players, and is… well, for the sake of spoilers, let’s say not the most upbeat of circumstances. (Though it’s generating plenty of discussion, which I’m sure was the devs’ intent.) And this is a game whose story excels to the point where the combat mechanics look underwhelming in comparison, where each new combat situation is shorter than the last thanks to my upgraded abilities and weapons, simply filling time before I’m off to roam the next impressive environment to find more clues about the story and setting.
The sad part about those environments is that, in our age of Elder Scrolls and Grand Theft Autos, we expect—want—demand a game to give us open sandbox freedom to explore every square inch. We can explore all we want here in the flying city of Columbia, but its narrow alleys and locked storefronts offer limited potential. What we do see is a gold mine of creativity, beautifully rendered in stunning graphics. We have stunning vistas of city blocks rising and falling, the hectic chaos of riding a metal skyhook on rails during some of the more impressive combat set-pieces, and one of the few new worlds of gaming that’s shockingly original. But I can’t help but wish that a company like BethSoft would come up with a new, original idea half this creative for their next FPS/RPG sandbox epic; this world has plenty of depth, but I want to get lost in it and can’t.
By which I mean, Borderlands 2 has fun environments to explore, but c’mon: it’s all rocks and rusting debris and one glittering robot metropolis and you can’t beat a flying fucking city, you just can’t. Meanwhile, Bioshock: Infinite is an illusory sandbox whose freedoms don’t live up to the expectations its trailers set. It creates those restraints to hone in and focus on its narrative, its story.
The story is what carries the game with the critics, earning its nods and speculation as Game of the Year. It’s immersive, a pressing mystery you and your character have to solve with clues and foreshadowing scattered throughout Columbia’s environs. It’s a highly cinematic game, meaning it’s a tad linear (hence the lack of free-roaming environments) and sticks you in a certain character and set-piece situations. It’s also the best in that field since Half-Life 2. Unlike Half-Life, the protagonist has dialogue, and more than just the set-pieces are scripted; there’s a lack of choice there, but more feel of character and story. Really, it’s trying to re-shape the “game” element into something we’re more used to experiencing as “movies” or “books,” and bind that narrative into the game medium using twitch shooting. Bioshock: Infinite is either representative of the future’s more interactive narratives (“played any good books/films lately?”), or it was constructed in the wrong medium.
The reason Bioshock Infinite is a milestone is that it challenges some stereotypical assumptions we have about games, re-defining ideas. That shooters can have a better story than combat. That escort missions don’t have to be a chore, and that secondary ride-along characters can be deep and interesting, helpful gameplay-wise, even highly likeable—after this, I foresee secondary/support characters being approached from new directions. And that games can ask deep questions—about concepts as heady as philosophy, choice, freedom, predestination and fatalism, fate and free will. Nothing that dense had really been tried before, so no one knew if it was something gamers would embrace or shy away from. Instead, they took to the forums and Youtube and created a massive free-flowing dialogue about its elements, looking closely to point out intricate details such as the Lutece connections and the various hidden sounds in the game.
I realize I’m saying a lot about the game without really saying anything about the game; it released months ago, so unless you’ve been hiding under a rock I’m going to assume you’ve already played it, or have read/watched a review—there are many others more qualified than I to review it. For the most part, the game is a bestseller with high critical praise, but there are several key complaints about trying to write a linear narrative in game form, even some downright hostile criticism. I agree with some of those criticisms, but am willing to look past them for two reasons. First, the game itself is an addicting experience, sheer brilliance tarnished by a few critical flaws; it grabs you with its immersion and doesn’t let go until the ending. Second, looking past it is looking ahead at how its styles and techniques could impact the gaming medium. Bioshock: Infinite could be a one-off that makes some waves but shows that a more linear/cinematic story-driven game is an evolutionary dead end, or it could be a half-formed stepping-stone which foreshadows a paradigm shift in how game developers integrate narrative into gameplay.
Most of all, I’m curious where its DLC will take the game, given that the ending had both a sense of finality and an open-to-interpretation uncertainty, a lack of clarity that helped open the Pandora’s box of discussion.
The backer version of the OUYA finally shipped, and a couple of reviews went up on Engadget and The Verge. Both were a bit… less than favorable; I’ve been waiting to see if another review will emerge—maybe a positive one to contrast with—but so far I haven’t seen anything more substantial than a positive tweet or blog post. The most positive take I’ve seen was at Joystiq, and even it admitted the OUYA “is not ready for primetime” and that “people want a sure thing.”
Granted, I know a lot of people will discount the reviews off-hand with something like “Well, it wasn’t an Apple product, of course they’d hate it.” While the sites have a little more leniency with Apple products, I’ve noted they can be predisposed to ranking other products ahead—the new Roku 3 getting better remarks than the Apple TV, for example. But I think it reveals a little more of the underlying reasons why I’ve been so hesitant to buy into the OUYA hype: I’m surprised as heck that they managed to get it out by their expected March production dates, but as the reviews show, it’s somewhere around an alpha build, or a very early beta.
I’m getting the sense that people are liking it more for its hackability as a home media center—this generation’s XBMC, using off-the-shelf cellphone parts and a forked version of Android—despite the product being pitched as an alternative for the indie gamer who doesn’t want to shell out the big bucks for the PS or Xbox systems shipping in the next few years, or who’d take the time and effort to make their own Steam box. It’s for enthusiasts who get hyped up about Raspberry Pi, not the mainline gamer. (While the Pi is actually pretty cool, most people have no use for it on a daily basis.)
Which is fine; different strokes for different folks—I’m lazy and my Roku works fine for me, though I wish the originals would get the new hotness update instead of that shitty old carousel display that wastes space, and better support for USB devices and various audio/video codecs would be nice, to use the content on my external drive. Anyways, I’m getting sidetracked.
As a game system, the reviews point out the same reasons I was hesitant and didn’t go with the Kickstarter after all. First, Google’s had enough of a hard time getting people to design apps for 10″ tablets (plenty of apps on my Nexus 7 that scale for shit can attest to that); getting people to design apps for a 50″ HDTV would be nightmarish. Plus, OUYA doesn’t have the benefit of hiding its push behind the weight Google’s throwing around, since they’re using their own appstore environment. So not only do they have to convince developers to release another version of their app—bigger, stronger, and free-but-not-really-free—but you’ll have to re-purchase all the apps and games you already bought for your phone or tablet. Sigh.
Next, that app needs to run fine using the technology and parts used to power smartphones. Then there’s the fact that smartphone parts become obsolete every 6-10 months; the tech in the current OUYA is already falling behind. So OUYA is coming out with yearly $99 releases; ok, that keeps the tech nice and current, but the cost becomes more of a burden at that point. More than one and you’re already past the cost of a Roku; more than two and you’re past the cost of a decent Xbox 360; after five or six years of updates, you’ve just spent as much as you would on a next-gen Playstation, Xbox, or a solid mid-range gaming PC. Is $99 a pop expensive? Kinda, but in the grand scheme of technology, not really. Do you have to buy a new one every year? No, but I’d be surprised if the speed at which mobile gaming advances doesn’t necessitate purchasing a stronger OUYA every 2-3 years.
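The running-total math above can be sketched in a few lines; the comparison prices here are my own ballpark figures for illustration, not official MSRPs:

```python
# Back-of-the-envelope math: cumulative cost of buying a new $99 OUYA
# every year, compared against rough one-time prices for alternatives.
# The comparison numbers are ballpark assumptions, not exact SKU prices.
OUYA_PRICE = 99

comparisons = {
    "Roku 3": 99,
    "decent Xbox 360": 250,
    "next-gen console / midrange gaming PC": 500,
}

for year in range(1, 7):
    total = OUYA_PRICE * year
    passed = [name for name, price in comparisons.items() if total > price]
    note = f" (past the {', '.join(passed)})" if passed else ""
    print(f"Year {year}: ${total} spent{note}")
```

By year two you've passed the Roku, by year three the used-console range, and by year six you're in next-gen-console territory, which is the argument above in numbers.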
I like the implementation of the menu system as shown thus far, though I hope they patch over the bits of stock Jelly Bean sooner or later with their own stuff. And the promise from developers to make titles just for OUYA is a step in the right direction. Maybe it’s just my hesitation to be an early adopter, because early adopters don’t come into the established, full-fledged experience; case in point, compare the Blackberry 10 or Windows Mobile app stores to Google Play or the iOS App Store. Maybe it’s just that I’d rather put up with bad control schemes to play the same games on my mobiles because they fill a certain niche (killing time on the one electronic device most people consider essential), and would rather play the grandstanding, mainline titles—the Haloes, Bioshocks, and Elder Scrolls of gaming—on my TV or computer. Whatever the reason, I’ve actually been losing interest in the OUYA the closer it comes to mainstream production, and feel kind of bad about it.
I know it’s old news, since it came out just over two months ago, but Apple came out with their first high-capacity tablet: an iPad with 128 gigabytes of internal storage. Compared to personal computers, that doesn’t sound like much space, but most tablets and smartphones have wallowed in the 8/16/32/64 sizing mire, where the 8 gig isn’t terribly useful, the 16 utilitarian, and the 32/64 pretty expensive. Needless to say, the big iPad generated a number of articles and arguments before being forgotten after about a day… except by me, who didn’t have time to finish writing this until long after it stopped being news.
Three points I’d like to touch on.
First, the price is definitely high, but not at all unexpected; Apple has a habit of making products as good or slightly better than the competition, months before any serious competitor can match their marketing and promotional machine, much less their supply/demand, and then sell it at a 30% markup. Then, they come out with incremental, yearly updates, and have everything fall into a very specific pricing hierarchy where the next step up is around $100 more than the one below it. $499 for 16gb, $599 for 32gb, $699 for 64gb, ergo, $799 for 128gb.
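That pricing ladder is literally $100 per doubling of storage over the 16gb base; a toy formula of my own making (not anything Apple publishes) reproduces the whole lineup:

```python
# The $100-per-doubling pricing ladder described above, as a formula.
# This is my own sketch of the observed pattern, not Apple's price list.
from math import log2

def ipad_price(capacity_gb, base_capacity=16, base_price=499, step=100):
    """Wifi-only iPad tier price: $100 added per doubling over 16GB."""
    return base_price + step * int(log2(capacity_gb / base_capacity))

for gb in (16, 32, 64, 128):
    print(f"{gb}gb -> ${ipad_price(gb)}")
```

Plug in 128 and out falls $799, which is exactly where Apple landed.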
$799 for a wifi-only tablet is high: you could get, what, three Chromebooks for that price, or a decent Windows laptop, a solid midrange-plus desktop, one of the better Mac Minis, or both a Nexus 10 and Nexus 7 (both 32gb models). Actually, that last option isn’t a bad way to spend a chunk of change. And the MacBook Air is just a short half-step, price-wise, above the $929 wifi + cell data 128gb iPad.
But unlike its competitors Google and Amazon, Apple doesn’t subsidize its products to make bank on easy access to an online marketplace—Apple is selling an image, and the Apple Tax helps build that reputation by making the products more exclusive. (Creating both the hardware and software can’t hurt; where else are you going to go but Apple if you want to get upgrades for your Mac? Check those prices on RAM and the new hotness Fusion Drive, please.) Plus, miniaturizing a 128gb SSD can’t be cheap; a PC-ready internal SSD runs $80-140, and the tablet part would have to be notably smaller, thinner, and lighter.
All in all, it’s actually cheaper than its competitors ($1,049 for an Acer Iconia W700, $1,299 for a Razer Edge Pro, the Samsung ATIV price-slashed from $1,929 down to $1,300-something)… but note that all of those are running Windows 8, and desperately trying to be tablets, computers, and gaming consoles simultaneously. If Google/Samsung kept their pricing structure, a 128 gig Nexus 10 would be $699, which isn’t that much cheaper than the iPad pricing, considering the comparative lack of Android apps designed for large screens. (I’ve found plenty of awful looking apps on my 7″ Nexus, and I can’t imagine how bad they’d look on a 10″ one.)
Second, more storage is very much a good thing; in another five years, we’ll all look back and laugh when we think about computers that don’t have a terabyte or more of solid-state storage. The main criticism about a 128gb iPad, other than the price, was “What are you going to do with all that storage?” which I think is a very backwards-looking question. Yes, I can see only about two types of users who’d need that: consumers who want to have their entire music library—or a large part of it—on their tablet; and the few professionals who use the top-end CAD and graphic design software. The release saw quotes from the AutoCAD iOS app developers, and it’s their kind of product that would really shine on a large-capacity tablet.
So, the big benefit I’m seeing is that it opens up the floor for use by more professionals—not just corporate/enterprise users, but anyone who uses high-end software for engineering, graphic design, 3d modelling, etc. Now that there’s more storage, those big apps with big files aren’t as much of a problem. And while I’d personally dread using most of them on a touchscreen tablet, I can see the appeal of putting CAD, Adobe CreativeSuite, Maya, etc. on a device that’s far lighter and more portable than your standard laptop. (Besides, if it works for people in science fiction films, professional tablet computing can work for us, right?)
So, the detractors are saying “too bad there’s not many programs on there taking advantage of all this”—and by that, I also refer to the 4th generation iPad’s processor and graphics, which benchmark high against the current round of tablets but really don’t have anything in the app store to put that much strain on the hardware. On the other hand, there’s a hopeful “well, it means the doors have been opened for others to develop bigger, more demanding apps to take advantage of the disk size and hardware.” Whether they do or don’t, we shall see. The option is there, but designing those high-end programs to function gracefully on a 10″ touchscreen is a whole ‘nother matter.
Third, thanks to the rapid developments in cloud computing and cloud storage, I have to question the value of physical media. On the one hand, remember the embarrassing iCloud outage about a month after the 128gb iPad was announced? On the other, something I’ve learned from personal experience:
I have a 32gb Nexus 7 (wifi model), and a 64gb iPod touch 5th gen, both of which I use with regularity. (I wanted devices that were ultra-portable and lacked an expensive cell data contract; the Nexus fit my bill for a combination Android gaming platform, e-reader, and digital RPG assistant; the iPod fits in my pocket, stores most of my music, has an insane battery life, and it’s blue.) Both currently have around 20 gigs of free space, after formatting and loading them down with apps. They both store around 30 gigs of music, a little over half my music library, but in different ways. The iPod is packed because it has to hold all the song files on its drive; meanwhile, you’re probably wondering how the Nexus 7—with only 27.85 gigabytes of storage after formatting—can hold 30 gigs of music and still have other apps installed and 20 gigs of free space.
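As an aside, the gap between the “32gb” on the box and the 27.85 gigs the device actually reports comes from two places: marketing counts storage in decimal gigabytes (powers of ten) while the OS counts in binary units (powers of two), and Android reserves a slice for the system partition. A rough back-of-the-envelope sketch (the system-reservation figure is my own estimate derived from those two numbers, not a published spec):

```python
# Why a "32gb" Nexus 7 reports only ~27.85 gigs usable:
# the box counts decimal gigabytes (10^9 bytes), the OS counts
# binary units (2^30 bytes), and Android keeps a chunk for itself.
ADVERTISED_GB = 32
bytes_total = ADVERTISED_GB * 10**9

gib_total = bytes_total / 2**30          # what the OS would call ~29.8 "gigs"
system_reserved = gib_total - 27.85      # the rest goes to the system partition (my estimate)

print(round(gib_total, 2))         # 29.8
print(round(system_reserved, 2))   # 1.95
```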
The answer is cloud computing: Google’s Play service hosts all the tunes you buy through them, uploads the music library off your computer, and stores it all in the cloud to stream back down. My 30 gigs of music—plus some things I bought off Play but haven’t put in iTunes yet—take up not a single byte of storage on my device. Free of charge, I should add; Apple has their own cloud music storage system, iTunes Match, but in comparison it’s not very good. It does store more tracks, and at higher quality, but it doesn’t allow you to stream them back down out of the cloud: instead, it plays them while downloading, and then they’re back to taking up space on your hard disk. So I guess it streams them once, while downloading. Not an ideal solution, especially if you’re the poor sap who thought they could get by with an 8gb iPhone. In short, Google Play stores my songs, I stream them back down to play, and my tablet has both my whole library and loads of free space.
My gut says physical storage will remain the preference over cloud computing, especially since there are a number of issues with it—if I’m not at home, or within range of a free wifi network, my awesome Google Play library is worthless. A server outage would wreck me as well, and if Google somehow went under, I’d have to find another service to dump all those tracks into. Part of the reason I picked up the iPod was this very issue: the music is on the device, so I can go for a walk around the neighborhood without worrying about stopping the rock because I went off the network.
But, given the way our technology is developing, I think cloud computing will eventually overtake physical storage once two criteria are met. First, cloud services need to be more stable and secure; they’re no use if they go down for more than ten minutes. The more important second is prolific, widespread wifi. Most restaurants, malls, and even businesses have realized that, in our technological world, free wifi is an essential service, provided right alongside restrooms, free water, and decent lighting. You get free wifi at your local library, or in the hospital waiting room. We already have cars that function as wifi hotspots; I could use one to connect my Nexus and stream MOG, Pandora, or Spotify through the car speakers via Bluetooth, a sentence I’m pretty sure would have confused my grandparents. Give it another 10-15 years, and we could see wifi provided as an essential service everywhere: grocery stores, gas stations, you name it. Possibly a replacement frequency of some kind, combining the strength and stability of wifi (or Bluetooth, for that matter) with the wide coverage of a cell data network. We’re nowhere near that yet, but if I had to guess about our future, I’d bet it all on ease of access to the internet. The growth we’ll see in the coming decades will probably floor us.
When we reach the egalitarian science fictional utopia where our cities beam out free network connections from every streetlight, that’s the point where we can take physical storage out behind the shed. I’d take that over moving sidewalks any day… but the flying cars, now, that’s another story.
I have to say, the trailer for Far Cry 3 makes me really want to run out and pick it up. The depth and breadth—not to mention the various critics’ praise—shown in the ten-minute teaser trailer is impressive. The single-player story looks cool, the environments look immersive, and the leveling-up tattoos and gun modifications look amazing. But I’m still a little hesitant; it looks too much like an improved Far Cry 2, and Far Cry 2 was one of the most disappointing games I’ve ever played.
Far Cry 2 didn’t try to follow in the original game’s footsteps, other than having lush, expansive environments to drive around in, which wasn’t necessarily a bad thing to focus on; they even expanded the amount of sandbox, putting Far Cry 2 on the Elder Scrolls/Grand Theft Auto level. It took place in a war-torn African nation, with you playing a malaria-stricken mercenary sent after the arms dealer supplying both factions in the war. With an arsenal of weapons at your disposal, it’s your job to take on the two factions and fight your way through the jungles to get to him. Sounds good so far, right?
Well, the actual gameplay is very hit or miss. Some of the game’s mechanics are great, some of them terrible. Most of them are just bland and irritating.
- The setting is just plain huge, covering deserts, plains, jungles, slums, all the key African environment types. And when I say huge, I mean fucking gigantic: the map you start out on is bigger than most sandbox games, and then halfway through the game another map is unlocked, doubling the world size. The day-night cycle is impressive, and goes by at a decent pace. And those expansive maps are filled with lush vistas and really cool locales, even if the plants can be rubbery at times.
- The problem is, it’s empty, and infuriatingly boxed-in despite its size. Big and expansive with not a damn thing to do in it, except at the scattered waypoints on your map. There’s no real wildlife out there, and except in the wide-open plains and desert areas, you’re restricted to either a.) traveling by foot, which takes YEARS to cross the map, or b.) driving along rivers and dirt tracks through the jungle.
- The problem with the dirt tracks and roads is that there are faction checkpoints every mile or so, filled with troops you have to kill. Between the checkpoints are roving vehicles on patrol. Either way, you have to stop and fight after about every five minutes of traveling, which gets monotonous fast. Worse, it takes the focus away from the gorgeous scenery.
- Respawns! When you leave a checkpoint after blowing it up and looting it, walk five feet out of view, then turn around and go back because you’re short on grenades, and bam, reinforcements have arrived. There is no sense that anything you do impacts this world, since the checkpoints and road patrols regenerate some ten seconds after you get out of view. This also works in the inverse: walk too far away from a vehicle and it vanishes.
- Enemy AI! This exists only in on/off form; either they’re shooting at you, or they’re milling about. If you silently kill one of them while trying to be a stealthy sniper, they all see you and open fire within minutes, and have the accuracy to shoot the ass off a gnat at half a mile. Enemy tactics revolve around a.) seeing you regardless of how well concealed you are, b.) shooting way more accurately than you do, and c.) running wildly in your general direction. Gone are the tactical geniuses of the original Far Cry; these guys are just dumb mooks with superhuman accuracy and x-ray vision. Their vehicles drive faster than yours, too, which makes chases not very interesting.
- Realism! Your weapons degrade over time, so you have maybe three to five pitched battles before that awesome assault rifle blows up in your hand. Pick up an enemy weapon and it blows up even quicker, because these sniper ninjas use garbage equipment. The same thing happens to your vehicles; they can take scant little damage before their engines start smoking and you have to hop out and crank the fix bolt a few times to repair them.
- Malaria Outbreaks! As part of the realism, now and then your screen turns sepia-toned, and you have to travel back to the center of Map A to restock on medicine. If you don’t, you become sick and wobbly-cam ensues.
- Vehicles! Handle like overladen shopping carts, and can accelerate from 3 to 30 in about a minute. They are made of glass, except for the special Unimog armored truck (which is made out of particle board). There’s also a hang glider, which is neat, and boats, which are not.
- Fire! Okay, this was pretty cool: throw a Molotov cocktail or set off an explosion and the world catches fire. It spreads out to a certain radius and stops, but it looks cool and can be really helpful. The plains burn especially well.
- One of the great ideas the game brought in was the allies system: you have some mercenary allies who ask you to do sidequests, and who offer alternate routes for doing main quests. Do enough of those and their friendship bar increases; then, when you’d otherwise be shot to death, one of your buddies arrives to pull your ass out of the fire. Way cool. What made it better was when your buddy went down trying to save you, requiring your medicinal syrettes to heal them, or a mercy-kill overdose if they’re too badly injured. It added a bit of depth to a game that sorely needed it.
- The game uses conflict diamonds as currency, which is a fantastic piece of flavor.
- The diamonds are used to unlock weapons, then to buy and upgrade them. There’s a wide and interesting selection of killin’ utensils, but the best options came only with the preorder bonus DLC. At any one time, you get a selection of grenades and Molotov cocktails, and get to pick one light weapon (pistols, machine pistols, a flare gun, an M79 grenade launcher, or a sawed-off double-barreled shotgun), one longarm (a variety of assault rifles, a shotgun or two, and a bolt-action sniper rifle), and one heavy weapon (an RPG, a crossbow that fires explosive bolts, a flamethrower, a mortar, machine guns, and a recoilless rifle). The upgrades only modify accuracy, damage, and reliability (how long a gun goes before it blows up); they’re a nice touch, but I’m not actually sure how much they improve anything.
- The designers tried to institute some story-oriented missions and sidequests, but these are so predictable that they become boring as hell. Standard mission setup involves you traveling to the other side of the map, emptying out all the checkpoints in between, killing everything at the mission location, then clearing out the checkpoints on the way back, to get paid with a few conflict diamonds. Missions include “blow up the convoy looping endlessly from point A to point B,” “kill dude X for the dude at the cell towers,” “kill things so the gun shop unlocks more gear,” “kill everything at location Z and bring back the area’s macguffin,” etc. Imagine a half-dozen copy/pasted versions of those mission types and you get the picture.
- The main missions weren’t much better, and only a handful fall outside the most generic mission types. In truth, there isn’t much of a story, just an endless series of repetitive quests similar to the sidequests. Lord, it’s like the world’s emptiest MMORPG. The exceptions were hunting for secret diamond caches (a scavenger hunt), and unlocking safehouses (you’d bump into them, kill the occupiers, and have a safe place to sleep at night). Both were useful and entertaining pursuits from an exploration standpoint, if a little gamey.
- TL;DR: To complete one mission, you’ll most likely end up driving for half an hour, adding ten to fifteen minutes to the trip for each checkpoint you run. After spending the better half of an hour getting to the mission, you fight more mooks—just like at those checkpoints, but at someplace bigger!—then fight your way back through that half-hour drive and all those refilled checkpoints. Wash, rinse, and repeat for six billion identical missions.
- Did I mention that no matter whose side you’re working for, all the faction checkpoints will open up on you at first sight? The game tries to play it off as you being some undercover operative on a secret mission for the head honchos, who can’t tell their hired mooks not to shoot at you, but it feels like lazy game design.
So, some interesting features, fantastic immersion, great graphics, and a number of serious, critical flaws that ruined the entertainment value of the game. I get the feeling the developers were trying to make a first-person Grand Theft Auto, which they set in the African equivalent of an Elder Scrolls game. But they did so without understanding what made those sandboxes fun. Most of all, those worlds were packed with interesting locations, NPCs, and missions, things that Far Cry 2 sorely lacked. In a way, it was too much sandbox, not enough content. And your actions had next to no impact on the cookie-cutter world.
Some people really liked Far Cry 2. The critics loved it; just look at its Metacritic rating to see the difference between “critic” and “corporate shill.” I stopped playing after some twenty hours and went off to beat Deus Ex: Human Revolution. Maybe things got better on the second map, but I was just too bored—it’s not a game, it’s a slog. Compared to sandboxes like Fallout: New Vegas, Just Cause 2, Saints Row 2, Oblivion, Skyrim, even the STALKER games (limited sandboxes though they are), I just didn’t find Far Cry 2 entertaining or redeemable. Hence my apprehension at the third game in the Far Cry series, despite the trailer’s lovely promises. Won’t get fooled again.