Virtual Reality is now… a reality! Apart from issues shipping the things out to their anxiously awaiting fans, the Oculus Rift CV1 and the HTC Vive are now in the hands of consumers, ready to experience the closest thing we have to the Matrix. Or Sword Art Online. I am one of those fans, having been in possession of an HTC Vive for a few weeks now, and can give my thoughts on preparing for and using VR. Prior to getting the Vive I was upgrading my PC, not only in preparation for the Vive but also to play games at higher settings with smoother frame rates.
Two days ago I finally received my parcel containing the Moto 360, Motorola’s first Android Wear smartwatch. This is a quick first impressions post, but basically I really like it so far… and you probably shouldn’t get this. It’s definitely an early-adopter product.
Why did I get a Moto 360?
As some of you may know, I have continued to wear watches even as smartphones have become ubiquitous as our new timekeepers. It partly stems from a nervous tic I had of glancing at my wrist, and even after suppressing that tic I still feel uncomfortable leaving the house without a watch. So it seemed like a no-brainer to get a watch with more functionality than my current one (which has a whopping 2 complications – timer and calendar date). The Moto 360 stands out from the current line-up of Android Wear watches for being circular, and for charging with Qi Wireless Charging. The latter is useful since I already have a number of Qi chargers and a Qi power bank.
How is the Hardware?
It looks really nice. With a circular metal rim interrupted only by a single button, it definitely looks more like a watch than any other current or announced Android Wear device. The screen, while not as pixel-dense as this generation of smartphones, is good enough that the watch faces don’t have a noticeable screen door effect. The built-in light sensor seems essential for a smartwatch with a colour display; I’m not sure why other Android Wear devices have not included one. The back of the watch is unfortunately plastic, perhaps a necessity due to the heat generated by wireless charging, and there have been a few reports of cracking because the Moto 360 has a circular protrusion where the watch straps connect, rather than a flat edge. It does still feel nice to wear, though.
The size of the watch is one thing that has been discussed with concern. It isn’t as big as promotional pictures make it look, but it is much larger than a typical woman’s watch, which may dissuade female purchasers. On my wrist it doesn’t look out of place, and compared to my current Swiss Military watch it is only a few millimetres larger at most.
The other thing is battery life. I haven’t had it long enough to test thoroughly, but with the latest update people are reporting up to 2 days per charge with the Moto 360. This probably won’t be an issue for me as I’ll charge it every night (with wireless charging it’s as easy as dropping it onto the cradle), but for a watch that is obviously a very short battery life. The Pebble, in contrast, offers up to 7 days per charge, though it uses a monochrome e-paper screen. To achieve these battery savings, the Moto 360 detects the orientation of the watch: when it is not facing you, it turns the screen off (this can be toggled); face up, it shows a dim watch face (with no moving second hand); and when tilted towards you it fully lights up and also shows notifications. For the most part it works well.
How is Android Wear?
Android Wear is clearly still in development. There is still no official API for watch faces, and despite Google warning people not to make watch face apps until it releases that API, people have created apps that do just that. I haven’t dabbled in them yet, but there are watch faces that let you put the GoldenEye 007 watch interface or a Pip-Boy-style watch face on your watch. Notifications on your phone will appear on the watch, along with Google Now cards that are relevant to you at the time. For example, being near a bus stop will show you the bus routes that go through it. Some notifications support actions on your watch, like pausing and skipping music, or replying to a Tweet.
Of course, with a small screen and no keyboard, the only way to enter text is by voice. While Google Now has really good voice recognition, it still has its limits. You can use voice commands to do other things, like show your heart rate (the Moto 360 has a built-in heart rate monitor) or record a note. Some of these can also be reached by swiping up while the Google Now voice prompt is on-screen and selecting the command from a list, but it is not as easy as having an app drawer.
This is related to an issue with Android Wear apps that don’t show up as notification cards. IFTTT and Unified Remote (beta) have Android Wear support, but to get to those apps you need to either say “Start If This Then That”, or scroll to the bottom of the command list and go into the “Start…” menu to open the app. It’s a minor hassle, and probably not one easily solved; I think that even with its crown and app drawer, the Apple Watch may not be the solution to this either. At least, when you do open IFTTT or Unified Remote, you can shut down your PC or mute your phone with a press on your watch. It is truly the future.
There was also one time when the watch hung and refused to respond for 10 seconds or so. Not sure why.
The Moto 360 also has a software advantage in Moto Connect, which lets you customise the official watch faces that come with the watch. There is a decent selection, and the Moto Connect app on your phone can change their colours and sometimes their functionality (e.g. add extra time zones to multi-dial faces).
Right now the Moto 360 offers enough functionality for me to easily use it as my regular watch; then again, I already wore a watch regularly, and having an extra thing to charge daily is no hassle for me. This is not going to make you want to wear a watch, yet. The future holds more updates for Android Wear, which are rumoured to include the ability to store and play music on the watch without a phone connected, and support for NFC (so if Google Wallet finally becomes widely available, you might be able to pay with your watch, as with Apple Pay).
I am not a graphics programmer. I have only passing knowledge of the graphics pipeline and could not code a shader to save my life. But as a gamer I am fascinated by the research behind anti-aliasing techniques, and so it was with great interest I read on NeoGAF about the new AA technique being used in Far Cry 4, Hybrid Reconstruction Anti-Aliasing, presented at SIGGRAPH 2014. Of particular note is that this technique performs a little better on AMD’s latest graphics architecture, GCN, which is present in both current-generation consoles. It suggests that, given the console-focused development of most AAA game studios (and game engine studios), the next few years could see more research into graphical techniques that are optimised for, or more performant on, the GCN architecture, and thus influence the direction of high-end graphics performance on the PC.
A Comparison of Anti-Aliasing Techniques
This Reddit post gives a pretty good explanation of the differences between the most common anti-aliasing options used up to now. The main distinction I want to point out is that AA techniques usually work either during the process of creating an image from the 3D objects, or on the rasterised image just before it is output to your display. The former will typically change the colour of a pixel as the image is rendered, while the latter will usually apply a blur effect where it detects an edge between one object and another.
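To make the second camp concrete, here is a minimal toy sketch of a post-process filter in the spirit of FXAA/SMAA: it scans a finished greyscale image for high-contrast edges and blends across them. The function name and threshold are my own invention for illustration; this is not any shipping algorithm.

```python
# Toy post-process AA: detect luminance edges in an already-rasterised image
# and blend across them. Illustrates the "filter after rasterisation" camp.

def post_aa(image, threshold=0.25):
    """Blend each pixel with its 4-neighbours where a luminance edge is found."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            neighbours = [image[ny][nx]
                          for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                          if 0 <= ny < h and 0 <= nx < w]
            # Edge test: large contrast between this pixel and any neighbour.
            if max(abs(image[y][x] - n) for n in neighbours) > threshold:
                out[y][x] = (image[y][x] + sum(neighbours) / len(neighbours)) / 2
    return out

# A hard vertical edge between black (0.0) and white (1.0):
image = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
smoothed = post_aa(image)
# Pixels on either side of the edge move towards grey; flat areas are untouched.
```

Note that the filter only ever sees the final pixels, which is why this family is cheap but can blur texture detail that merely looks like an edge.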
Most games in the past few years offer a choice between the two major camps of anti-aliasing: Multisampling AA (or one of its related techniques – EQAA or CSAA) and a morphological AA technique, usually FXAA. FXAA was the cheaper option in terms of hit to graphics performance, usually with a negligible visual difference from MSAA (Jeff Atwood of Coding Horror wrote a piece on this in 2011). After Atwood’s post, Subpixel Morphological Anti-Aliasing was introduced by a team from Crytek and Universidad de Zaragoza. SMAA gained traction as a better-looking alternative to FXAA after the comparison video on the SMAA website made the rounds on gaming forums and news sites, and an SMAA injector was released.
There is also MSAA’s parent technique, the grand-daddy of AA: Supersampling (SSAA), typically offered by games optimised in some way for PCs (e.g. The Witcher 2, Metro: Last Light). Because it essentially renders the game at double or quadruple the display resolution, it will destroy most video cards (figuratively speaking). Finally, TXAA is another family of anti-aliasing, restricted to Nvidia cards (though AMD has an equivalent). Since I’ve not had an Nvidia card recently, I can offer little comment about how it looks.
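The cost of SSAA is easy to see in a sketch: every display pixel requires shading several hidden pixels, so the whole pipeline runs at a multiple of the output resolution. The `shade()` function below is a hypothetical stand-in for an entire renderer, which is exactly why the technique is so expensive.

```python
# Toy supersampling (SSAA): shade at 2x the resolution in each axis, then
# average each 2x2 block down to one display pixel.

def shade(x, y):
    """Hypothetical scene: white above the diagonal y = x, black below."""
    return 1.0 if y > x else 0.0

def render_ssaa(width, height, factor=2):
    hi_w, hi_h = width * factor, height * factor
    # Render the whole scene at the higher resolution (the expensive part).
    hires = [[shade(x / hi_w, y / hi_h) for x in range(hi_w)] for y in range(hi_h)]
    # Downsample: each display pixel is the mean of a factor x factor block.
    return [[sum(hires[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(width)]
            for y in range(height)]

frame = render_ssaa(4, 4)
# Pixels straddling the diagonal come out as intermediate greys
# instead of hard black or white, softening the jagged edge.
```

With `factor=2` this shades 4x as many pixels as the display needs, which matches the "double or quadruple the display resolution" cost described above.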
Hybrid Reconstruction Anti-Aliasing
HRAA is a technique that appears to (again, I’m no graphics programmer) mix temporal anti-aliasing (TXAA is one such technique) with two other families of anti-aliasing: coverage-based anti-aliasing and analytical anti-aliasing. Temporal anti-aliasing analyses the frames before the current frame and ensures that the colour of a pixel makes sense as it moves from the last frame to the current one. Coverage-based anti-aliasing is related to MSAA (CSAA is a coverage-based technique), and differs in that (basically) it looks at more of the 3D triangles that overlap and neighbour a particular pixel than MSAA does. Finally, analytical anti-aliasing looks at the distance between the pixel and the nearest vertical and horizontal edges.
The performance cost of HRAA is relatively low: greater than FXAA’s, but less than MSAA’s. However, the latest Nvidia architecture, Maxwell, removed support for CSAA and may not be well suited for HRAA because it lacks the requisite direct access to coverage buffers in the graphics pipeline. The most interesting thing I took away from this is that, as we progress through the current generation of consoles, the most impressive techniques for improving visuals could be optimised for, or restricted to, the consoles’ graphical architecture – AMD’s GCN. That’s only my amateur observation, though, and it’s not as if Nvidia will not pursue its own research on realtime rendering techniques and other parts of the game engine. Nvidia still has significant control over the physics middleware market with PhysX (with some impressive flying debris in the latest Batman games), has responded to AMD’s TressFX with HairWorks, and has previously developed TXAA and HBAO+. But for the next few years the largest proportion of graphics programming work in video games will be done on an AMD architecture.